00:00:00.001 Started by upstream project "autotest-per-patch" build number 122918 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.141 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.141 The recommended git tool is: git 00:00:00.142 using credential 00000000-0000-0000-0000-000000000002 00:00:00.143 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.191 Using shallow fetch with depth 1 00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.191 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.202 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.202 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.902 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.914 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.925 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:04.925 > git config core.sparsecheckout # timeout=10 00:00:04.938 > git read-tree -mu HEAD # timeout=10 00:00:04.953 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:04.973 Commit message: "inventory/dev: add missing long names" 00:00:04.973 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:05.055 [Pipeline] Start of Pipeline 00:00:05.070 [Pipeline] library 00:00:05.072 Loading library shm_lib@master 00:00:05.073 Library shm_lib@master is cached. Copying from home. 00:00:05.087 [Pipeline] node 00:00:20.091 Still waiting to schedule task 00:00:20.092 Waiting for next available executor on ‘vagrant-vm-host’ 00:09:05.091 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:05.092 [Pipeline] { 00:09:05.101 [Pipeline] catchError 00:09:05.102 [Pipeline] { 00:09:05.112 [Pipeline] wrap 00:09:05.118 [Pipeline] { 00:09:05.127 [Pipeline] stage 00:09:05.130 [Pipeline] { (Prologue) 00:09:05.147 [Pipeline] echo 00:09:05.149 Node: VM-host-WFP1 00:09:05.155 [Pipeline] cleanWs 00:09:05.163 [WS-CLEANUP] Deleting project workspace... 00:09:05.163 [WS-CLEANUP] Deferred wipeout is used... 
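The checkout at the top of the log uses the usual Jenkins shallow-fetch pattern: fetch only the tip of master with --depth=1, resolve FETCH_HEAD to a commit, then force-checkout that commit into a detached HEAD. Below is a minimal standalone sketch of the same sequence; the repository URL and git arguments are copied from the log, while the scratch directory and credential handling are assumptions (the job injects credentials through GIT_ASKPASS and works inside its own workspace).

# Hypothetical reproduction of the shallow checkout performed above.
workspace=/tmp/jbp-checkout   # assumed scratch path; the job uses its Jenkins workspace
repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git init "$workspace"
cd "$workspace"
git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"   # detached HEAD at the fetched tip, as in the log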
00:09:05.168 [WS-CLEANUP] done 00:09:05.336 [Pipeline] setCustomBuildProperty 00:09:05.406 [Pipeline] nodesByLabel 00:09:05.408 Found a total of 1 nodes with the 'sorcerer' label 00:09:05.416 [Pipeline] httpRequest 00:09:05.428 HttpMethod: GET 00:09:05.428 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:09:05.429 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:09:05.432 Response Code: HTTP/1.1 200 OK 00:09:05.432 Success: Status code 200 is in the accepted range: 200,404 00:09:05.433 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:09:05.706 [Pipeline] sh 00:09:05.990 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:09:06.010 [Pipeline] httpRequest 00:09:06.014 HttpMethod: GET 00:09:06.015 URL: http://10.211.164.101/packages/spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:09:06.016 Sending request to url: http://10.211.164.101/packages/spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:09:06.018 Response Code: HTTP/1.1 200 OK 00:09:06.019 Success: Status code 200 is in the accepted range: 200,404 00:09:06.019 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:09:09.616 [Pipeline] sh 00:09:09.894 + tar --no-same-owner -xf spdk_c3870302ff258b8c5f594a7c860b8d3e6c2d503d.tar.gz 00:09:12.434 [Pipeline] sh 00:09:12.714 + git -C spdk log --oneline -n5 00:09:12.714 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:09:12.714 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:09:12.714 7a8d39909 Revert "test/common: Enable inherit_errexit" 00:09:12.714 4506c0c36 test/common: Enable inherit_errexit 00:09:12.714 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:09:12.736 [Pipeline] writeFile 00:09:12.755 [Pipeline] sh 00:09:13.036 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:09:13.047 [Pipeline] sh 00:09:13.328 + cat autorun-spdk.conf 00:09:13.328 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:13.328 SPDK_TEST_NVMF=1 00:09:13.328 SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:13.328 SPDK_TEST_URING=1 00:09:13.328 SPDK_TEST_USDT=1 00:09:13.328 SPDK_RUN_UBSAN=1 00:09:13.328 NET_TYPE=virt 00:09:13.328 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:13.335 RUN_NIGHTLY=0 00:09:13.337 [Pipeline] } 00:09:13.356 [Pipeline] // stage 00:09:13.371 [Pipeline] stage 00:09:13.372 [Pipeline] { (Run VM) 00:09:13.386 [Pipeline] sh 00:09:13.673 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:09:13.673 + echo 'Start stage prepare_nvme.sh' 00:09:13.673 Start stage prepare_nvme.sh 00:09:13.673 + [[ -n 3 ]] 00:09:13.673 + disk_prefix=ex3 00:09:13.673 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:09:13.673 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:09:13.673 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:09:13.673 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:13.673 ++ SPDK_TEST_NVMF=1 00:09:13.673 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:13.673 ++ SPDK_TEST_URING=1 00:09:13.673 ++ SPDK_TEST_USDT=1 00:09:13.673 ++ SPDK_RUN_UBSAN=1 00:09:13.673 ++ NET_TYPE=virt 00:09:13.673 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:13.673 ++ RUN_NIGHTLY=0 00:09:13.673 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:13.673 + nvme_files=() 00:09:13.673 + declare -A nvme_files 00:09:13.673 + 
backend_dir=/var/lib/libvirt/images/backends 00:09:13.673 + nvme_files['nvme.img']=5G 00:09:13.673 + nvme_files['nvme-cmb.img']=5G 00:09:13.673 + nvme_files['nvme-multi0.img']=4G 00:09:13.673 + nvme_files['nvme-multi1.img']=4G 00:09:13.673 + nvme_files['nvme-multi2.img']=4G 00:09:13.673 + nvme_files['nvme-openstack.img']=8G 00:09:13.673 + nvme_files['nvme-zns.img']=5G 00:09:13.673 + (( SPDK_TEST_NVME_PMR == 1 )) 00:09:13.673 + (( SPDK_TEST_FTL == 1 )) 00:09:13.673 + (( SPDK_TEST_NVME_FDP == 1 )) 00:09:13.673 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:09:13.673 + for nvme in "${!nvme_files[@]}" 00:09:13.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:09:13.673 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:09:13.673 + for nvme in "${!nvme_files[@]}" 00:09:13.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:09:13.673 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:09:13.673 + for nvme in "${!nvme_files[@]}" 00:09:13.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:09:13.932 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:09:13.932 + for nvme in "${!nvme_files[@]}" 00:09:13.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:09:13.932 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:09:13.932 + for nvme in "${!nvme_files[@]}" 00:09:13.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:09:13.932 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:09:13.932 + for nvme in "${!nvme_files[@]}" 00:09:13.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:09:13.932 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:09:13.932 + for nvme in "${!nvme_files[@]}" 00:09:13.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:09:14.190 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:09:14.190 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:09:14.191 + echo 'End stage prepare_nvme.sh' 00:09:14.191 End stage prepare_nvme.sh 00:09:14.202 [Pipeline] sh 00:09:14.483 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:09:14.483 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:09:14.483 00:09:14.483 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:09:14.483 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:09:14.483 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:14.483 HELP=0 00:09:14.483 DRY_RUN=0 00:09:14.483 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:09:14.483 NVME_DISKS_TYPE=nvme,nvme, 00:09:14.483 NVME_AUTO_CREATE=0 00:09:14.483 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:09:14.483 NVME_CMB=,, 00:09:14.483 NVME_PMR=,, 00:09:14.483 NVME_ZNS=,, 00:09:14.483 NVME_MS=,, 00:09:14.483 NVME_FDP=,, 00:09:14.483 SPDK_VAGRANT_DISTRO=fedora38 00:09:14.483 SPDK_VAGRANT_VMCPU=10 00:09:14.483 SPDK_VAGRANT_VMRAM=12288 00:09:14.483 SPDK_VAGRANT_PROVIDER=libvirt 00:09:14.483 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:09:14.483 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:09:14.483 SPDK_OPENSTACK_NETWORK=0 00:09:14.483 VAGRANT_PACKAGE_BOX=0 00:09:14.483 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:09:14.483 FORCE_DISTRO=true 00:09:14.483 VAGRANT_BOX_VERSION= 00:09:14.483 EXTRA_VAGRANTFILES= 00:09:14.483 NIC_MODEL=e1000 00:09:14.483 00:09:14.483 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:09:14.483 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:17.011 Bringing machine 'default' up with 'libvirt' provider... 00:09:17.946 ==> default: Creating image (snapshot of base box volume). 00:09:18.204 ==> default: Creating domain with the following settings... 00:09:18.204 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715780835_93e17dcd88324954c677 00:09:18.204 ==> default: -- Domain type: kvm 00:09:18.204 ==> default: -- Cpus: 10 00:09:18.204 ==> default: -- Feature: acpi 00:09:18.204 ==> default: -- Feature: apic 00:09:18.204 ==> default: -- Feature: pae 00:09:18.204 ==> default: -- Memory: 12288M 00:09:18.204 ==> default: -- Memory Backing: hugepages: 00:09:18.204 ==> default: -- Management MAC: 00:09:18.204 ==> default: -- Loader: 00:09:18.204 ==> default: -- Nvram: 00:09:18.204 ==> default: -- Base box: spdk/fedora38 00:09:18.204 ==> default: -- Storage pool: default 00:09:18.204 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715780835_93e17dcd88324954c677.img (20G) 00:09:18.204 ==> default: -- Volume Cache: default 00:09:18.204 ==> default: -- Kernel: 00:09:18.204 ==> default: -- Initrd: 00:09:18.204 ==> default: -- Graphics Type: vnc 00:09:18.204 ==> default: -- Graphics Port: -1 00:09:18.204 ==> default: -- Graphics IP: 127.0.0.1 00:09:18.204 ==> default: -- Graphics Password: Not defined 00:09:18.204 ==> default: -- Video Type: cirrus 00:09:18.204 ==> default: -- Video VRAM: 9216 00:09:18.204 ==> default: -- Sound Type: 00:09:18.204 ==> default: -- Keymap: en-us 00:09:18.204 ==> default: -- TPM Path: 00:09:18.204 ==> default: -- INPUT: type=mouse, bus=ps2 00:09:18.204 ==> default: -- Command line args: 00:09:18.204 ==> default: -> value=-device, 00:09:18.204 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:09:18.204 ==> default: -> value=-drive, 00:09:18.204 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:09:18.204 ==> default: -> value=-device, 00:09:18.204 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:18.204 ==> default: -> value=-device, 00:09:18.204 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:09:18.204 ==> default: -> value=-drive, 00:09:18.204 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:09:18.204 ==> default: -> value=-device, 00:09:18.204 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:18.204 ==> default: -> value=-drive, 00:09:18.204 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:09:18.204 ==> default: -> value=-device, 00:09:18.204 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:18.204 ==> default: -> value=-drive, 00:09:18.204 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:09:18.204 ==> default: -> value=-device, 00:09:18.204 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:18.771 ==> default: Creating shared folders metadata... 00:09:18.771 ==> default: Starting domain. 00:09:20.673 ==> default: Waiting for domain to get an IP address... 00:09:38.754 ==> default: Waiting for SSH to become available... 00:09:39.690 ==> default: Configuring and enabling network interfaces... 00:09:44.987 default: SSH address: 192.168.121.41:22 00:09:44.987 default: SSH username: vagrant 00:09:44.987 default: SSH auth method: private key 00:09:47.539 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:09:55.653 ==> default: Mounting SSHFS shared folder... 00:09:58.178 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:09:58.178 ==> default: Checking Mount.. 00:09:59.589 ==> default: Folder Successfully Mounted! 00:09:59.589 ==> default: Running provisioner: file... 00:10:00.526 default: ~/.gitconfig => .gitconfig 00:10:01.462 00:10:01.462 SUCCESS! 00:10:01.462 00:10:01.462 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:10:01.462 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:10:01.462 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
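The command-line fragments printed while the domain is created show how the VM gets its NVMe topology: each controller is a -device nvme with a serial and PCI address, each raw backing file from prepare_nvme.sh is attached as an if=none -drive, and each -device nvme-ns binds one of those drives to a namespace id on its controller. A hedged standalone sketch of the multi-namespace controller (serial 12341, three namespaces) follows; the -drive/-device arguments are copied from the log, while -enable-kvm and the memory size are assumptions, since the real command is assembled by Vagrant/libvirt rather than typed by hand.

# Illustrative only: the job builds this command through Vagrant/libvirt.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096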
00:10:01.462 00:10:01.473 [Pipeline] } 00:10:01.492 [Pipeline] // stage 00:10:01.501 [Pipeline] dir 00:10:01.502 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:10:01.504 [Pipeline] { 00:10:01.518 [Pipeline] catchError 00:10:01.519 [Pipeline] { 00:10:01.534 [Pipeline] sh 00:10:01.815 + vagrant ssh-config --host vagrant 00:10:01.815 + sed -ne /^Host/,$p 00:10:01.815 + tee ssh_conf 00:10:05.161 Host vagrant 00:10:05.161 HostName 192.168.121.41 00:10:05.161 User vagrant 00:10:05.161 Port 22 00:10:05.161 UserKnownHostsFile /dev/null 00:10:05.161 StrictHostKeyChecking no 00:10:05.161 PasswordAuthentication no 00:10:05.161 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:10:05.161 IdentitiesOnly yes 00:10:05.161 LogLevel FATAL 00:10:05.161 ForwardAgent yes 00:10:05.161 ForwardX11 yes 00:10:05.161 00:10:05.174 [Pipeline] withEnv 00:10:05.177 [Pipeline] { 00:10:05.192 [Pipeline] sh 00:10:05.473 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:10:05.473 source /etc/os-release 00:10:05.473 [[ -e /image.version ]] && img=$(< /image.version) 00:10:05.473 # Minimal, systemd-like check. 00:10:05.473 if [[ -e /.dockerenv ]]; then 00:10:05.473 # Clear garbage from the node's name: 00:10:05.473 # agt-er_autotest_547-896 -> autotest_547-896 00:10:05.473 # $HOSTNAME is the actual container id 00:10:05.473 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:10:05.473 if mountpoint -q /etc/hostname; then 00:10:05.473 # We can assume this is a mount from a host where container is running, 00:10:05.473 # so fetch its hostname to easily identify the target swarm worker. 00:10:05.473 container="$(< /etc/hostname) ($agent)" 00:10:05.473 else 00:10:05.473 # Fallback 00:10:05.473 container=$agent 00:10:05.473 fi 00:10:05.473 fi 00:10:05.473 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:10:05.473 00:10:05.743 [Pipeline] } 00:10:05.761 [Pipeline] // withEnv 00:10:05.768 [Pipeline] setCustomBuildProperty 00:10:05.782 [Pipeline] stage 00:10:05.784 [Pipeline] { (Tests) 00:10:05.800 [Pipeline] sh 00:10:06.080 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:10:06.355 [Pipeline] timeout 00:10:06.355 Timeout set to expire in 40 min 00:10:06.357 [Pipeline] { 00:10:06.375 [Pipeline] sh 00:10:06.657 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:10:07.223 HEAD is now at c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:10:07.236 [Pipeline] sh 00:10:07.516 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:10:07.787 [Pipeline] sh 00:10:08.080 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:10:08.365 [Pipeline] sh 00:10:08.645 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:10:08.903 ++ readlink -f spdk_repo 00:10:08.903 + DIR_ROOT=/home/vagrant/spdk_repo 00:10:08.903 + [[ -n /home/vagrant/spdk_repo ]] 00:10:08.903 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:10:08.903 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:10:08.903 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:10:08.903 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:10:08.903 + [[ -d /home/vagrant/spdk_repo/output ]] 00:10:08.903 + cd /home/vagrant/spdk_repo 00:10:08.903 + source /etc/os-release 00:10:08.903 ++ NAME='Fedora Linux' 00:10:08.903 ++ VERSION='38 (Cloud Edition)' 00:10:08.903 ++ ID=fedora 00:10:08.903 ++ VERSION_ID=38 00:10:08.903 ++ VERSION_CODENAME= 00:10:08.903 ++ PLATFORM_ID=platform:f38 00:10:08.903 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:10:08.903 ++ ANSI_COLOR='0;38;2;60;110;180' 00:10:08.903 ++ LOGO=fedora-logo-icon 00:10:08.903 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:10:08.903 ++ HOME_URL=https://fedoraproject.org/ 00:10:08.903 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:10:08.903 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:10:08.903 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:10:08.903 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:10:08.903 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:10:08.903 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:10:08.903 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:10:08.903 ++ SUPPORT_END=2024-05-14 00:10:08.903 ++ VARIANT='Cloud Edition' 00:10:08.903 ++ VARIANT_ID=cloud 00:10:08.903 + uname -a 00:10:08.903 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:10:08.903 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:09.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.470 Hugepages 00:10:09.470 node hugesize free / total 00:10:09.470 node0 1048576kB 0 / 0 00:10:09.470 node0 2048kB 0 / 0 00:10:09.470 00:10:09.470 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:09.470 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:09.470 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:09.470 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:09.470 + rm -f /tmp/spdk-ld-path 00:10:09.470 + source autorun-spdk.conf 00:10:09.470 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:09.470 ++ SPDK_TEST_NVMF=1 00:10:09.470 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:09.470 ++ SPDK_TEST_URING=1 00:10:09.470 ++ SPDK_TEST_USDT=1 00:10:09.470 ++ SPDK_RUN_UBSAN=1 00:10:09.470 ++ NET_TYPE=virt 00:10:09.470 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:09.470 ++ RUN_NIGHTLY=0 00:10:09.470 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:10:09.470 + [[ -n '' ]] 00:10:09.470 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:10:09.470 + for M in /var/spdk/build-*-manifest.txt 00:10:09.470 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:10:09.470 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:09.470 + for M in /var/spdk/build-*-manifest.txt 00:10:09.470 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:10:09.470 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:09.470 ++ uname 00:10:09.470 + [[ Linux == \L\i\n\u\x ]] 00:10:09.470 + sudo dmesg -T 00:10:09.470 + sudo dmesg --clear 00:10:09.728 + dmesg_pid=5097 00:10:09.728 + [[ Fedora Linux == FreeBSD ]] 00:10:09.729 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:09.729 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:09.729 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:10:09.729 + [[ -x /usr/src/fio-static/fio ]] 00:10:09.729 + sudo dmesg -Tw 00:10:09.729 + export FIO_BIN=/usr/src/fio-static/fio 00:10:09.729 + FIO_BIN=/usr/src/fio-static/fio 00:10:09.729 + [[ '' == 
\/\q\e\m\u\_\v\f\i\o\/* ]] 00:10:09.729 + [[ ! -v VFIO_QEMU_BIN ]] 00:10:09.729 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:10:09.729 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:09.729 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:09.729 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:10:09.729 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:09.729 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:09.729 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:09.729 Test configuration: 00:10:09.729 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:09.729 SPDK_TEST_NVMF=1 00:10:09.729 SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:09.729 SPDK_TEST_URING=1 00:10:09.729 SPDK_TEST_USDT=1 00:10:09.729 SPDK_RUN_UBSAN=1 00:10:09.729 NET_TYPE=virt 00:10:09.729 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:09.729 RUN_NIGHTLY=0 13:48:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.729 13:48:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:09.729 13:48:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.729 13:48:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.729 13:48:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.729 13:48:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.729 13:48:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.729 13:48:08 -- paths/export.sh@5 -- $ export PATH 00:10:09.729 13:48:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.729 13:48:08 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:10:09.729 13:48:08 -- common/autobuild_common.sh@437 -- $ date +%s 00:10:09.729 13:48:08 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715780888.XXXXXX 00:10:09.729 13:48:08 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715780888.kDnzGU 00:10:09.729 13:48:08 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:10:09.729 13:48:08 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:10:09.729 13:48:08 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:10:09.729 13:48:08 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:10:09.729 13:48:08 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:10:09.729 13:48:08 -- common/autobuild_common.sh@453 -- $ get_config_params 00:10:09.729 13:48:08 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:10:09.729 13:48:08 -- common/autotest_common.sh@10 -- $ set +x 00:10:09.729 13:48:08 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:10:09.729 13:48:08 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:10:09.729 13:48:08 -- pm/common@17 -- $ local monitor 00:10:09.729 13:48:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:09.729 13:48:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:09.729 13:48:08 -- pm/common@21 -- $ date +%s 00:10:09.729 13:48:08 -- pm/common@25 -- $ sleep 1 00:10:09.729 13:48:08 -- pm/common@21 -- $ date +%s 00:10:09.729 13:48:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715780888 00:10:09.729 13:48:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715780888 00:10:09.729 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715780888_collect-vmstat.pm.log 00:10:09.729 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715780888_collect-cpu-load.pm.log 00:10:10.688 13:48:09 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:10:10.688 13:48:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:10:10.688 13:48:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:10:10.688 13:48:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:10:10.688 13:48:09 -- spdk/autobuild.sh@16 -- $ date -u 00:10:10.688 Wed May 15 01:48:09 PM UTC 2024 00:10:10.688 13:48:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:10:10.945 v24.05-pre-661-gc3870302f 00:10:10.946 13:48:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:10:10.946 13:48:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:10:10.946 13:48:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:10:10.946 13:48:09 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:10:10.946 13:48:09 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:10:10.946 13:48:09 -- common/autotest_common.sh@10 -- $ set +x 00:10:10.946 ************************************ 00:10:10.946 START TEST ubsan 00:10:10.946 ************************************ 00:10:10.946 using ubsan 00:10:10.946 13:48:09 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:10:10.946 00:10:10.946 real 0m0.000s 00:10:10.946 user 0m0.000s 00:10:10.946 sys 0m0.000s 00:10:10.946 13:48:09 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 
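At this point the job has copied autorun-spdk.conf into the VM and handed it to spdk/autorun.sh, which sources the file and prints the "Test configuration" block seen above. A minimal sketch of what that wrapper step amounts to inside the VM is below; the variable values and paths are copied from the log, and the only assumption is that the SPDK checkout already sits at /home/vagrant/spdk_repo/spdk as the rsync step arranged.

# Minimal sketch of the test-configuration hand-off inside the VM.
cat > /home/vagrant/spdk_repo/autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_URING=1
SPDK_TEST_USDT=1
SPDK_RUN_UBSAN=1
NET_TYPE=virt
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
RUN_NIGHTLY=0
EOF
# autorun.sh sources the conf, then runs autobuild and the selected test suites.
/home/vagrant/spdk_repo/spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf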
00:10:10.946 ************************************ 00:10:10.946 END TEST ubsan 00:10:10.946 ************************************ 00:10:10.946 13:48:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:10:10.946 13:48:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:10:10.946 13:48:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:10:10.946 13:48:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:10:10.946 13:48:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:10:10.946 13:48:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:10:10.946 13:48:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:10:10.946 13:48:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:10:10.946 13:48:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:10:10.946 13:48:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:10:10.946 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:10.946 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:11.514 Using 'verbs' RDMA provider 00:10:27.337 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:10:45.429 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:10:45.429 Creating mk/config.mk...done. 00:10:45.429 Creating mk/cc.flags.mk...done. 00:10:45.429 Type 'make' to build. 00:10:45.429 13:48:41 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:10:45.429 13:48:41 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:10:45.429 13:48:41 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:10:45.429 13:48:41 -- common/autotest_common.sh@10 -- $ set +x 00:10:45.429 ************************************ 00:10:45.429 START TEST make 00:10:45.429 ************************************ 00:10:45.429 13:48:41 make -- common/autotest_common.sh@1121 -- $ make -j10 00:10:45.429 make[1]: Nothing to be done for 'all'. 
00:10:52.016 The Meson build system 00:10:52.016 Version: 1.3.1 00:10:52.016 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:10:52.016 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:10:52.016 Build type: native build 00:10:52.016 Program cat found: YES (/usr/bin/cat) 00:10:52.016 Project name: DPDK 00:10:52.016 Project version: 23.11.0 00:10:52.016 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:10:52.016 C linker for the host machine: cc ld.bfd 2.39-16 00:10:52.016 Host machine cpu family: x86_64 00:10:52.016 Host machine cpu: x86_64 00:10:52.016 Message: ## Building in Developer Mode ## 00:10:52.016 Program pkg-config found: YES (/usr/bin/pkg-config) 00:10:52.016 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:10:52.016 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:10:52.016 Program python3 found: YES (/usr/bin/python3) 00:10:52.016 Program cat found: YES (/usr/bin/cat) 00:10:52.016 Compiler for C supports arguments -march=native: YES 00:10:52.016 Checking for size of "void *" : 8 00:10:52.016 Checking for size of "void *" : 8 (cached) 00:10:52.016 Library m found: YES 00:10:52.016 Library numa found: YES 00:10:52.016 Has header "numaif.h" : YES 00:10:52.016 Library fdt found: NO 00:10:52.016 Library execinfo found: NO 00:10:52.016 Has header "execinfo.h" : YES 00:10:52.016 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:10:52.016 Run-time dependency libarchive found: NO (tried pkgconfig) 00:10:52.016 Run-time dependency libbsd found: NO (tried pkgconfig) 00:10:52.016 Run-time dependency jansson found: NO (tried pkgconfig) 00:10:52.016 Run-time dependency openssl found: YES 3.0.9 00:10:52.016 Run-time dependency libpcap found: YES 1.10.4 00:10:52.016 Has header "pcap.h" with dependency libpcap: YES 00:10:52.016 Compiler for C supports arguments -Wcast-qual: YES 00:10:52.016 Compiler for C supports arguments -Wdeprecated: YES 00:10:52.016 Compiler for C supports arguments -Wformat: YES 00:10:52.016 Compiler for C supports arguments -Wformat-nonliteral: NO 00:10:52.016 Compiler for C supports arguments -Wformat-security: NO 00:10:52.016 Compiler for C supports arguments -Wmissing-declarations: YES 00:10:52.016 Compiler for C supports arguments -Wmissing-prototypes: YES 00:10:52.016 Compiler for C supports arguments -Wnested-externs: YES 00:10:52.016 Compiler for C supports arguments -Wold-style-definition: YES 00:10:52.016 Compiler for C supports arguments -Wpointer-arith: YES 00:10:52.016 Compiler for C supports arguments -Wsign-compare: YES 00:10:52.016 Compiler for C supports arguments -Wstrict-prototypes: YES 00:10:52.016 Compiler for C supports arguments -Wundef: YES 00:10:52.016 Compiler for C supports arguments -Wwrite-strings: YES 00:10:52.016 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:10:52.016 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:10:52.016 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:10:52.016 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:10:52.016 Program objdump found: YES (/usr/bin/objdump) 00:10:52.016 Compiler for C supports arguments -mavx512f: YES 00:10:52.016 Checking if "AVX512 checking" compiles: YES 00:10:52.016 Fetching value of define "__SSE4_2__" : 1 00:10:52.016 Fetching value of define "__AES__" : 1 00:10:52.016 Fetching value of define "__AVX__" : 1 00:10:52.016 
Fetching value of define "__AVX2__" : 1 00:10:52.016 Fetching value of define "__AVX512BW__" : 1 00:10:52.016 Fetching value of define "__AVX512CD__" : 1 00:10:52.016 Fetching value of define "__AVX512DQ__" : 1 00:10:52.016 Fetching value of define "__AVX512F__" : 1 00:10:52.016 Fetching value of define "__AVX512VL__" : 1 00:10:52.016 Fetching value of define "__PCLMUL__" : 1 00:10:52.016 Fetching value of define "__RDRND__" : 1 00:10:52.016 Fetching value of define "__RDSEED__" : 1 00:10:52.016 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:10:52.016 Fetching value of define "__znver1__" : (undefined) 00:10:52.016 Fetching value of define "__znver2__" : (undefined) 00:10:52.016 Fetching value of define "__znver3__" : (undefined) 00:10:52.016 Fetching value of define "__znver4__" : (undefined) 00:10:52.016 Compiler for C supports arguments -Wno-format-truncation: YES 00:10:52.016 Message: lib/log: Defining dependency "log" 00:10:52.016 Message: lib/kvargs: Defining dependency "kvargs" 00:10:52.016 Message: lib/telemetry: Defining dependency "telemetry" 00:10:52.016 Checking for function "getentropy" : NO 00:10:52.016 Message: lib/eal: Defining dependency "eal" 00:10:52.016 Message: lib/ring: Defining dependency "ring" 00:10:52.016 Message: lib/rcu: Defining dependency "rcu" 00:10:52.016 Message: lib/mempool: Defining dependency "mempool" 00:10:52.016 Message: lib/mbuf: Defining dependency "mbuf" 00:10:52.016 Fetching value of define "__PCLMUL__" : 1 (cached) 00:10:52.016 Fetching value of define "__AVX512F__" : 1 (cached) 00:10:52.016 Fetching value of define "__AVX512BW__" : 1 (cached) 00:10:52.016 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:10:52.016 Fetching value of define "__AVX512VL__" : 1 (cached) 00:10:52.016 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:10:52.016 Compiler for C supports arguments -mpclmul: YES 00:10:52.016 Compiler for C supports arguments -maes: YES 00:10:52.016 Compiler for C supports arguments -mavx512f: YES (cached) 00:10:52.016 Compiler for C supports arguments -mavx512bw: YES 00:10:52.016 Compiler for C supports arguments -mavx512dq: YES 00:10:52.016 Compiler for C supports arguments -mavx512vl: YES 00:10:52.016 Compiler for C supports arguments -mvpclmulqdq: YES 00:10:52.016 Compiler for C supports arguments -mavx2: YES 00:10:52.016 Compiler for C supports arguments -mavx: YES 00:10:52.016 Message: lib/net: Defining dependency "net" 00:10:52.016 Message: lib/meter: Defining dependency "meter" 00:10:52.016 Message: lib/ethdev: Defining dependency "ethdev" 00:10:52.016 Message: lib/pci: Defining dependency "pci" 00:10:52.016 Message: lib/cmdline: Defining dependency "cmdline" 00:10:52.016 Message: lib/hash: Defining dependency "hash" 00:10:52.016 Message: lib/timer: Defining dependency "timer" 00:10:52.016 Message: lib/compressdev: Defining dependency "compressdev" 00:10:52.016 Message: lib/cryptodev: Defining dependency "cryptodev" 00:10:52.016 Message: lib/dmadev: Defining dependency "dmadev" 00:10:52.016 Compiler for C supports arguments -Wno-cast-qual: YES 00:10:52.016 Message: lib/power: Defining dependency "power" 00:10:52.016 Message: lib/reorder: Defining dependency "reorder" 00:10:52.016 Message: lib/security: Defining dependency "security" 00:10:52.016 Has header "linux/userfaultfd.h" : YES 00:10:52.016 Has header "linux/vduse.h" : YES 00:10:52.016 Message: lib/vhost: Defining dependency "vhost" 00:10:52.016 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:10:52.016 Message: 
drivers/bus/pci: Defining dependency "bus_pci" 00:10:52.016 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:10:52.016 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:10:52.016 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:10:52.016 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:10:52.016 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:10:52.016 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:10:52.016 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:10:52.016 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:10:52.016 Program doxygen found: YES (/usr/bin/doxygen) 00:10:52.016 Configuring doxy-api-html.conf using configuration 00:10:52.016 Configuring doxy-api-man.conf using configuration 00:10:52.016 Program mandb found: YES (/usr/bin/mandb) 00:10:52.016 Program sphinx-build found: NO 00:10:52.016 Configuring rte_build_config.h using configuration 00:10:52.016 Message: 00:10:52.016 ================= 00:10:52.016 Applications Enabled 00:10:52.016 ================= 00:10:52.016 00:10:52.016 apps: 00:10:52.016 00:10:52.016 00:10:52.016 Message: 00:10:52.016 ================= 00:10:52.016 Libraries Enabled 00:10:52.016 ================= 00:10:52.016 00:10:52.016 libs: 00:10:52.016 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:10:52.016 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:10:52.016 cryptodev, dmadev, power, reorder, security, vhost, 00:10:52.016 00:10:52.016 Message: 00:10:52.016 =============== 00:10:52.016 Drivers Enabled 00:10:52.016 =============== 00:10:52.016 00:10:52.016 common: 00:10:52.016 00:10:52.016 bus: 00:10:52.016 pci, vdev, 00:10:52.016 mempool: 00:10:52.016 ring, 00:10:52.016 dma: 00:10:52.016 00:10:52.016 net: 00:10:52.016 00:10:52.016 crypto: 00:10:52.016 00:10:52.016 compress: 00:10:52.016 00:10:52.016 vdpa: 00:10:52.016 00:10:52.016 00:10:52.016 Message: 00:10:52.016 ================= 00:10:52.016 Content Skipped 00:10:52.016 ================= 00:10:52.016 00:10:52.016 apps: 00:10:52.016 dumpcap: explicitly disabled via build config 00:10:52.016 graph: explicitly disabled via build config 00:10:52.016 pdump: explicitly disabled via build config 00:10:52.016 proc-info: explicitly disabled via build config 00:10:52.016 test-acl: explicitly disabled via build config 00:10:52.016 test-bbdev: explicitly disabled via build config 00:10:52.016 test-cmdline: explicitly disabled via build config 00:10:52.016 test-compress-perf: explicitly disabled via build config 00:10:52.016 test-crypto-perf: explicitly disabled via build config 00:10:52.016 test-dma-perf: explicitly disabled via build config 00:10:52.016 test-eventdev: explicitly disabled via build config 00:10:52.016 test-fib: explicitly disabled via build config 00:10:52.016 test-flow-perf: explicitly disabled via build config 00:10:52.016 test-gpudev: explicitly disabled via build config 00:10:52.016 test-mldev: explicitly disabled via build config 00:10:52.017 test-pipeline: explicitly disabled via build config 00:10:52.017 test-pmd: explicitly disabled via build config 00:10:52.017 test-regex: explicitly disabled via build config 00:10:52.017 test-sad: explicitly disabled via build config 00:10:52.017 test-security-perf: explicitly disabled via build config 00:10:52.017 00:10:52.017 libs: 00:10:52.017 metrics: explicitly disabled via build config 00:10:52.017 acl: explicitly disabled via 
build config 00:10:52.017 bbdev: explicitly disabled via build config 00:10:52.017 bitratestats: explicitly disabled via build config 00:10:52.017 bpf: explicitly disabled via build config 00:10:52.017 cfgfile: explicitly disabled via build config 00:10:52.017 distributor: explicitly disabled via build config 00:10:52.017 efd: explicitly disabled via build config 00:10:52.017 eventdev: explicitly disabled via build config 00:10:52.017 dispatcher: explicitly disabled via build config 00:10:52.017 gpudev: explicitly disabled via build config 00:10:52.017 gro: explicitly disabled via build config 00:10:52.017 gso: explicitly disabled via build config 00:10:52.017 ip_frag: explicitly disabled via build config 00:10:52.017 jobstats: explicitly disabled via build config 00:10:52.017 latencystats: explicitly disabled via build config 00:10:52.017 lpm: explicitly disabled via build config 00:10:52.017 member: explicitly disabled via build config 00:10:52.017 pcapng: explicitly disabled via build config 00:10:52.017 rawdev: explicitly disabled via build config 00:10:52.017 regexdev: explicitly disabled via build config 00:10:52.017 mldev: explicitly disabled via build config 00:10:52.017 rib: explicitly disabled via build config 00:10:52.017 sched: explicitly disabled via build config 00:10:52.017 stack: explicitly disabled via build config 00:10:52.017 ipsec: explicitly disabled via build config 00:10:52.017 pdcp: explicitly disabled via build config 00:10:52.017 fib: explicitly disabled via build config 00:10:52.017 port: explicitly disabled via build config 00:10:52.017 pdump: explicitly disabled via build config 00:10:52.017 table: explicitly disabled via build config 00:10:52.017 pipeline: explicitly disabled via build config 00:10:52.017 graph: explicitly disabled via build config 00:10:52.017 node: explicitly disabled via build config 00:10:52.017 00:10:52.017 drivers: 00:10:52.017 common/cpt: not in enabled drivers build config 00:10:52.017 common/dpaax: not in enabled drivers build config 00:10:52.017 common/iavf: not in enabled drivers build config 00:10:52.017 common/idpf: not in enabled drivers build config 00:10:52.017 common/mvep: not in enabled drivers build config 00:10:52.017 common/octeontx: not in enabled drivers build config 00:10:52.017 bus/auxiliary: not in enabled drivers build config 00:10:52.017 bus/cdx: not in enabled drivers build config 00:10:52.017 bus/dpaa: not in enabled drivers build config 00:10:52.017 bus/fslmc: not in enabled drivers build config 00:10:52.017 bus/ifpga: not in enabled drivers build config 00:10:52.017 bus/platform: not in enabled drivers build config 00:10:52.017 bus/vmbus: not in enabled drivers build config 00:10:52.017 common/cnxk: not in enabled drivers build config 00:10:52.017 common/mlx5: not in enabled drivers build config 00:10:52.017 common/nfp: not in enabled drivers build config 00:10:52.017 common/qat: not in enabled drivers build config 00:10:52.017 common/sfc_efx: not in enabled drivers build config 00:10:52.017 mempool/bucket: not in enabled drivers build config 00:10:52.017 mempool/cnxk: not in enabled drivers build config 00:10:52.017 mempool/dpaa: not in enabled drivers build config 00:10:52.017 mempool/dpaa2: not in enabled drivers build config 00:10:52.017 mempool/octeontx: not in enabled drivers build config 00:10:52.017 mempool/stack: not in enabled drivers build config 00:10:52.017 dma/cnxk: not in enabled drivers build config 00:10:52.017 dma/dpaa: not in enabled drivers build config 00:10:52.017 dma/dpaa2: not in enabled 
drivers build config 00:10:52.017 dma/hisilicon: not in enabled drivers build config 00:10:52.017 dma/idxd: not in enabled drivers build config 00:10:52.017 dma/ioat: not in enabled drivers build config 00:10:52.017 dma/skeleton: not in enabled drivers build config 00:10:52.017 net/af_packet: not in enabled drivers build config 00:10:52.017 net/af_xdp: not in enabled drivers build config 00:10:52.017 net/ark: not in enabled drivers build config 00:10:52.017 net/atlantic: not in enabled drivers build config 00:10:52.017 net/avp: not in enabled drivers build config 00:10:52.017 net/axgbe: not in enabled drivers build config 00:10:52.017 net/bnx2x: not in enabled drivers build config 00:10:52.017 net/bnxt: not in enabled drivers build config 00:10:52.017 net/bonding: not in enabled drivers build config 00:10:52.017 net/cnxk: not in enabled drivers build config 00:10:52.017 net/cpfl: not in enabled drivers build config 00:10:52.017 net/cxgbe: not in enabled drivers build config 00:10:52.017 net/dpaa: not in enabled drivers build config 00:10:52.017 net/dpaa2: not in enabled drivers build config 00:10:52.017 net/e1000: not in enabled drivers build config 00:10:52.017 net/ena: not in enabled drivers build config 00:10:52.017 net/enetc: not in enabled drivers build config 00:10:52.017 net/enetfec: not in enabled drivers build config 00:10:52.017 net/enic: not in enabled drivers build config 00:10:52.017 net/failsafe: not in enabled drivers build config 00:10:52.017 net/fm10k: not in enabled drivers build config 00:10:52.017 net/gve: not in enabled drivers build config 00:10:52.017 net/hinic: not in enabled drivers build config 00:10:52.017 net/hns3: not in enabled drivers build config 00:10:52.017 net/i40e: not in enabled drivers build config 00:10:52.017 net/iavf: not in enabled drivers build config 00:10:52.017 net/ice: not in enabled drivers build config 00:10:52.017 net/idpf: not in enabled drivers build config 00:10:52.017 net/igc: not in enabled drivers build config 00:10:52.017 net/ionic: not in enabled drivers build config 00:10:52.017 net/ipn3ke: not in enabled drivers build config 00:10:52.017 net/ixgbe: not in enabled drivers build config 00:10:52.017 net/mana: not in enabled drivers build config 00:10:52.017 net/memif: not in enabled drivers build config 00:10:52.017 net/mlx4: not in enabled drivers build config 00:10:52.017 net/mlx5: not in enabled drivers build config 00:10:52.017 net/mvneta: not in enabled drivers build config 00:10:52.017 net/mvpp2: not in enabled drivers build config 00:10:52.017 net/netvsc: not in enabled drivers build config 00:10:52.017 net/nfb: not in enabled drivers build config 00:10:52.017 net/nfp: not in enabled drivers build config 00:10:52.017 net/ngbe: not in enabled drivers build config 00:10:52.017 net/null: not in enabled drivers build config 00:10:52.017 net/octeontx: not in enabled drivers build config 00:10:52.017 net/octeon_ep: not in enabled drivers build config 00:10:52.017 net/pcap: not in enabled drivers build config 00:10:52.017 net/pfe: not in enabled drivers build config 00:10:52.017 net/qede: not in enabled drivers build config 00:10:52.017 net/ring: not in enabled drivers build config 00:10:52.017 net/sfc: not in enabled drivers build config 00:10:52.017 net/softnic: not in enabled drivers build config 00:10:52.017 net/tap: not in enabled drivers build config 00:10:52.017 net/thunderx: not in enabled drivers build config 00:10:52.017 net/txgbe: not in enabled drivers build config 00:10:52.017 net/vdev_netvsc: not in enabled drivers 
build config 00:10:52.017 net/vhost: not in enabled drivers build config 00:10:52.017 net/virtio: not in enabled drivers build config 00:10:52.017 net/vmxnet3: not in enabled drivers build config 00:10:52.017 raw/*: missing internal dependency, "rawdev" 00:10:52.017 crypto/armv8: not in enabled drivers build config 00:10:52.017 crypto/bcmfs: not in enabled drivers build config 00:10:52.017 crypto/caam_jr: not in enabled drivers build config 00:10:52.017 crypto/ccp: not in enabled drivers build config 00:10:52.017 crypto/cnxk: not in enabled drivers build config 00:10:52.017 crypto/dpaa_sec: not in enabled drivers build config 00:10:52.017 crypto/dpaa2_sec: not in enabled drivers build config 00:10:52.017 crypto/ipsec_mb: not in enabled drivers build config 00:10:52.017 crypto/mlx5: not in enabled drivers build config 00:10:52.017 crypto/mvsam: not in enabled drivers build config 00:10:52.017 crypto/nitrox: not in enabled drivers build config 00:10:52.017 crypto/null: not in enabled drivers build config 00:10:52.017 crypto/octeontx: not in enabled drivers build config 00:10:52.017 crypto/openssl: not in enabled drivers build config 00:10:52.017 crypto/scheduler: not in enabled drivers build config 00:10:52.017 crypto/uadk: not in enabled drivers build config 00:10:52.017 crypto/virtio: not in enabled drivers build config 00:10:52.017 compress/isal: not in enabled drivers build config 00:10:52.017 compress/mlx5: not in enabled drivers build config 00:10:52.017 compress/octeontx: not in enabled drivers build config 00:10:52.017 compress/zlib: not in enabled drivers build config 00:10:52.017 regex/*: missing internal dependency, "regexdev" 00:10:52.017 ml/*: missing internal dependency, "mldev" 00:10:52.017 vdpa/ifc: not in enabled drivers build config 00:10:52.017 vdpa/mlx5: not in enabled drivers build config 00:10:52.017 vdpa/nfp: not in enabled drivers build config 00:10:52.017 vdpa/sfc: not in enabled drivers build config 00:10:52.017 event/*: missing internal dependency, "eventdev" 00:10:52.017 baseband/*: missing internal dependency, "bbdev" 00:10:52.017 gpu/*: missing internal dependency, "gpudev" 00:10:52.017 00:10:52.017 00:10:52.017 Build targets in project: 85 00:10:52.017 00:10:52.017 DPDK 23.11.0 00:10:52.017 00:10:52.017 User defined options 00:10:52.017 buildtype : debug 00:10:52.017 default_library : shared 00:10:52.017 libdir : lib 00:10:52.017 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:52.017 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:10:52.017 c_link_args : 00:10:52.017 cpu_instruction_set: native 00:10:52.017 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:10:52.017 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:10:52.017 enable_docs : false 00:10:52.017 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:10:52.017 enable_kmods : false 00:10:52.017 tests : false 00:10:52.017 00:10:52.017 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:10:52.017 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:10:52.017 [1/265] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:10:52.017 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:10:52.018 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:10:52.018 [4/265] Linking static target lib/librte_kvargs.a 00:10:52.018 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:10:52.276 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:10:52.276 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:10:52.276 [8/265] Linking static target lib/librte_log.a 00:10:52.276 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:10:52.276 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:10:52.536 [11/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:10:52.536 [12/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:10:52.536 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:10:52.536 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:10:52.536 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:10:52.536 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:10:52.795 [17/265] Linking static target lib/librte_telemetry.a 00:10:52.795 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:10:52.795 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:10:53.054 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:10:53.054 [21/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:10:53.054 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:10:53.054 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:10:53.054 [24/265] Linking target lib/librte_log.so.24.0 00:10:53.054 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:10:53.054 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:10:53.312 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:10:53.312 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:10:53.312 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:10:53.312 [30/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:10:53.312 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:10:53.312 [32/265] Linking target lib/librte_kvargs.so.24.0 00:10:53.570 [33/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:10:53.570 [34/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:10:53.570 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:10:53.570 [36/265] Linking target lib/librte_telemetry.so.24.0 00:10:53.570 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:10:53.570 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:10:53.570 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:10:53.570 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:10:53.570 [41/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:10:53.830 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:10:53.830 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:10:53.830 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:10:53.830 [45/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:10:53.830 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:10:54.088 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:10:54.088 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:10:54.088 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:10:54.088 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:10:54.088 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:10:54.088 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:10:54.347 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:10:54.347 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:10:54.347 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:10:54.347 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:10:54.347 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:10:54.347 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:10:54.606 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:10:54.606 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:10:54.606 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:10:54.606 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:10:54.606 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:10:54.606 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:10:54.865 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:10:54.865 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:10:54.865 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:10:54.865 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:10:55.123 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:10:55.123 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:10:55.123 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:10:55.123 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:10:55.123 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:10:55.123 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:10:55.123 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:10:55.123 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:10:55.123 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:10:55.123 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:10:55.385 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:10:55.385 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:10:55.385 [81/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:10:55.665 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:10:55.665 [83/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:10:55.665 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:10:55.665 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:10:55.665 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:10:55.665 [87/265] Linking static target lib/librte_eal.a 00:10:55.923 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:10:55.923 [89/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:10:55.923 [90/265] Linking static target lib/librte_ring.a 00:10:55.923 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:10:55.923 [92/265] Linking static target lib/librte_rcu.a 00:10:55.923 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:10:56.183 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:10:56.183 [95/265] Linking static target lib/librte_mempool.a 00:10:56.183 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:10:56.183 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:10:56.441 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:10:56.441 [99/265] Linking static target lib/librte_mbuf.a 00:10:56.441 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:10:56.441 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:10:56.441 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:10:56.441 [103/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:10:56.441 [104/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:10:56.701 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:10:56.701 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:10:56.701 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:10:56.701 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:10:56.701 [109/265] Linking static target lib/librte_net.a 00:10:56.963 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:10:56.963 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:10:56.963 [112/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:10:56.963 [113/265] Linking static target lib/librte_meter.a 00:10:57.224 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:10:57.224 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.224 [116/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.487 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:10:57.487 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.487 [119/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:10:57.754 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:10:58.023 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:10:58.023 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 
00:10:58.023 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:10:58.023 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:10:58.023 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:10:58.023 [126/265] Linking static target lib/librte_pci.a 00:10:58.023 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:10:58.292 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:10:58.292 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:10:58.292 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:10:58.292 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:10:58.292 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:10:58.563 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:10:58.564 [134/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:58.564 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:10:58.564 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:10:58.564 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:10:58.564 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:10:58.564 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:10:58.564 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:10:58.564 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:10:58.564 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:10:58.825 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:10:58.825 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:10:58.825 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:10:59.082 [146/265] Linking static target lib/librte_cmdline.a 00:10:59.082 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:10:59.340 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:10:59.340 [149/265] Linking static target lib/librte_timer.a 00:10:59.340 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:10:59.340 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:10:59.340 [152/265] Linking static target lib/librte_ethdev.a 00:10:59.340 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:10:59.340 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:10:59.597 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:10:59.597 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:10:59.597 [157/265] Linking static target lib/librte_compressdev.a 00:10:59.597 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:10:59.597 [159/265] Linking static target lib/librte_hash.a 00:10:59.855 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:11:00.114 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:11:00.114 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:11:00.114 [163/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:11:00.114 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:11:00.114 [165/265] Linking static target lib/librte_dmadev.a 00:11:00.114 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:11:00.373 [167/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:11:00.373 [168/265] Linking static target lib/librte_cryptodev.a 00:11:00.373 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:11:00.373 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:11:00.373 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:11:00.373 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:00.373 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:11:00.631 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:11:00.631 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:11:00.631 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:00.889 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:11:00.889 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:11:00.889 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:11:00.889 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:11:00.889 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:11:01.148 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:11:01.148 [183/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:11:01.148 [184/265] Linking static target lib/librte_reorder.a 00:11:01.148 [185/265] Linking static target lib/librte_power.a 00:11:01.148 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:11:01.148 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:11:01.148 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:11:01.148 [189/265] Linking static target lib/librte_security.a 00:11:01.406 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:11:01.664 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:11:01.664 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:11:01.922 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:11:01.922 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:11:02.181 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:11:02.181 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:11:02.181 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:11:02.181 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:11:02.440 [199/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:11:02.440 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:11:02.440 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:11:02.699 [202/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:02.699 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:11:02.699 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:11:02.699 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:11:02.699 [206/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:02.699 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:11:02.699 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:11:02.699 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:11:02.957 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:11:02.957 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:02.957 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:02.957 [213/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:11:02.957 [214/265] Linking static target drivers/librte_bus_pci.a 00:11:02.957 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:02.957 [216/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:02.957 [217/265] Linking static target drivers/librte_bus_vdev.a 00:11:02.957 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:11:02.957 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:11:03.216 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:03.216 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:11:03.216 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:03.216 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:03.216 [224/265] Linking static target drivers/librte_mempool_ring.a 00:11:03.477 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:03.748 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:03.748 [227/265] Linking static target lib/librte_vhost.a 00:11:06.281 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:11:08.817 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:08.817 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:11:08.818 [231/265] Linking target lib/librte_eal.so.24.0 00:11:08.818 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:11:08.818 [233/265] Linking target lib/librte_ring.so.24.0 00:11:08.818 [234/265] Linking target lib/librte_meter.so.24.0 00:11:08.818 [235/265] Linking target lib/librte_timer.so.24.0 00:11:08.818 [236/265] Linking target lib/librte_dmadev.so.24.0 00:11:08.818 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:11:08.818 [238/265] Linking target lib/librte_pci.so.24.0 00:11:08.818 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:11:08.818 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:11:08.818 [241/265] Generating symbol 
file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:11:08.818 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:11:08.818 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:11:08.818 [244/265] Linking target lib/librte_rcu.so.24.0 00:11:08.818 [245/265] Linking target lib/librte_mempool.so.24.0 00:11:08.818 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:11:09.077 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:11:09.077 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:11:09.077 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:11:09.077 [250/265] Linking target lib/librte_mbuf.so.24.0 00:11:09.336 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:11:09.336 [252/265] Linking target lib/librte_compressdev.so.24.0 00:11:09.336 [253/265] Linking target lib/librte_reorder.so.24.0 00:11:09.336 [254/265] Linking target lib/librte_net.so.24.0 00:11:09.336 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:11:09.595 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:11:09.595 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:11:09.595 [258/265] Linking target lib/librte_hash.so.24.0 00:11:09.595 [259/265] Linking target lib/librte_security.so.24.0 00:11:09.595 [260/265] Linking target lib/librte_cmdline.so.24.0 00:11:09.595 [261/265] Linking target lib/librte_ethdev.so.24.0 00:11:09.595 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:11:09.595 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:11:09.854 [264/265] Linking target lib/librte_power.so.24.0 00:11:09.854 [265/265] Linking target lib/librte_vhost.so.24.0 00:11:09.854 INFO: autodetecting backend as ninja 00:11:09.854 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:11:11.233 CC lib/log/log.o 00:11:11.233 CC lib/log/log_flags.o 00:11:11.233 CC lib/ut/ut.o 00:11:11.233 CC lib/log/log_deprecated.o 00:11:11.233 CC lib/ut_mock/mock.o 00:11:11.233 LIB libspdk_ut_mock.a 00:11:11.233 LIB libspdk_ut.a 00:11:11.233 LIB libspdk_log.a 00:11:11.233 SO libspdk_ut_mock.so.6.0 00:11:11.233 SO libspdk_ut.so.2.0 00:11:11.233 SO libspdk_log.so.7.0 00:11:11.233 SYMLINK libspdk_ut_mock.so 00:11:11.233 SYMLINK libspdk_ut.so 00:11:11.233 SYMLINK libspdk_log.so 00:11:11.491 CC lib/ioat/ioat.o 00:11:11.491 CC lib/dma/dma.o 00:11:11.491 CC lib/util/base64.o 00:11:11.491 CC lib/util/bit_array.o 00:11:11.491 CC lib/util/crc32c.o 00:11:11.491 CC lib/util/cpuset.o 00:11:11.491 CC lib/util/crc16.o 00:11:11.491 CC lib/util/crc32.o 00:11:11.491 CXX lib/trace_parser/trace.o 00:11:11.803 CC lib/vfio_user/host/vfio_user_pci.o 00:11:11.803 CC lib/util/crc32_ieee.o 00:11:11.803 CC lib/util/crc64.o 00:11:11.803 CC lib/vfio_user/host/vfio_user.o 00:11:11.803 LIB libspdk_dma.a 00:11:11.803 CC lib/util/dif.o 00:11:11.803 CC lib/util/fd.o 00:11:11.803 CC lib/util/file.o 00:11:11.803 LIB libspdk_ioat.a 00:11:11.803 SO libspdk_dma.so.4.0 00:11:11.803 CC lib/util/hexlify.o 00:11:11.803 SO libspdk_ioat.so.7.0 00:11:11.803 CC lib/util/iov.o 00:11:11.803 SYMLINK libspdk_dma.so 00:11:11.803 CC lib/util/math.o 00:11:11.803 SYMLINK libspdk_ioat.so 00:11:11.803 CC lib/util/pipe.o 00:11:11.803 CC 
lib/util/strerror_tls.o 00:11:11.803 CC lib/util/string.o 00:11:11.803 CC lib/util/uuid.o 00:11:11.803 LIB libspdk_vfio_user.a 00:11:12.061 CC lib/util/fd_group.o 00:11:12.061 SO libspdk_vfio_user.so.5.0 00:11:12.061 CC lib/util/xor.o 00:11:12.061 SYMLINK libspdk_vfio_user.so 00:11:12.061 CC lib/util/zipf.o 00:11:12.319 LIB libspdk_util.a 00:11:12.319 SO libspdk_util.so.9.0 00:11:12.319 LIB libspdk_trace_parser.a 00:11:12.577 SO libspdk_trace_parser.so.5.0 00:11:12.577 SYMLINK libspdk_util.so 00:11:12.577 SYMLINK libspdk_trace_parser.so 00:11:12.835 CC lib/idxd/idxd.o 00:11:12.835 CC lib/idxd/idxd_user.o 00:11:12.835 CC lib/json/json_parse.o 00:11:12.835 CC lib/conf/conf.o 00:11:12.835 CC lib/json/json_write.o 00:11:12.835 CC lib/json/json_util.o 00:11:12.835 CC lib/vmd/vmd.o 00:11:12.835 CC lib/rdma/common.o 00:11:12.835 CC lib/vmd/led.o 00:11:12.835 CC lib/env_dpdk/env.o 00:11:12.835 CC lib/env_dpdk/memory.o 00:11:12.835 LIB libspdk_conf.a 00:11:12.835 CC lib/env_dpdk/pci.o 00:11:13.093 CC lib/rdma/rdma_verbs.o 00:11:13.093 SO libspdk_conf.so.6.0 00:11:13.093 CC lib/env_dpdk/init.o 00:11:13.093 SYMLINK libspdk_conf.so 00:11:13.093 CC lib/env_dpdk/threads.o 00:11:13.093 CC lib/env_dpdk/pci_ioat.o 00:11:13.093 LIB libspdk_json.a 00:11:13.093 SO libspdk_json.so.6.0 00:11:13.093 SYMLINK libspdk_json.so 00:11:13.093 CC lib/env_dpdk/pci_virtio.o 00:11:13.093 CC lib/env_dpdk/pci_vmd.o 00:11:13.093 CC lib/env_dpdk/pci_idxd.o 00:11:13.093 LIB libspdk_rdma.a 00:11:13.093 LIB libspdk_idxd.a 00:11:13.093 SO libspdk_rdma.so.6.0 00:11:13.093 SO libspdk_idxd.so.12.0 00:11:13.351 CC lib/env_dpdk/pci_event.o 00:11:13.351 CC lib/env_dpdk/sigbus_handler.o 00:11:13.351 SYMLINK libspdk_idxd.so 00:11:13.351 CC lib/env_dpdk/pci_dpdk.o 00:11:13.351 SYMLINK libspdk_rdma.so 00:11:13.351 CC lib/env_dpdk/pci_dpdk_2207.o 00:11:13.351 CC lib/env_dpdk/pci_dpdk_2211.o 00:11:13.351 LIB libspdk_vmd.a 00:11:13.351 SO libspdk_vmd.so.6.0 00:11:13.351 SYMLINK libspdk_vmd.so 00:11:13.351 CC lib/jsonrpc/jsonrpc_server.o 00:11:13.351 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:11:13.351 CC lib/jsonrpc/jsonrpc_client.o 00:11:13.351 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:11:13.609 LIB libspdk_jsonrpc.a 00:11:13.609 SO libspdk_jsonrpc.so.6.0 00:11:13.867 SYMLINK libspdk_jsonrpc.so 00:11:13.867 LIB libspdk_env_dpdk.a 00:11:14.126 SO libspdk_env_dpdk.so.14.0 00:11:14.126 CC lib/rpc/rpc.o 00:11:14.126 SYMLINK libspdk_env_dpdk.so 00:11:14.385 LIB libspdk_rpc.a 00:11:14.385 SO libspdk_rpc.so.6.0 00:11:14.385 SYMLINK libspdk_rpc.so 00:11:14.952 CC lib/notify/notify_rpc.o 00:11:14.952 CC lib/notify/notify.o 00:11:14.952 CC lib/trace/trace.o 00:11:14.952 CC lib/trace/trace_flags.o 00:11:14.952 CC lib/trace/trace_rpc.o 00:11:14.952 CC lib/keyring/keyring.o 00:11:14.952 CC lib/keyring/keyring_rpc.o 00:11:14.952 LIB libspdk_notify.a 00:11:15.210 LIB libspdk_trace.a 00:11:15.210 SO libspdk_notify.so.6.0 00:11:15.210 LIB libspdk_keyring.a 00:11:15.210 SO libspdk_trace.so.10.0 00:11:15.210 SO libspdk_keyring.so.1.0 00:11:15.210 SYMLINK libspdk_notify.so 00:11:15.210 SYMLINK libspdk_trace.so 00:11:15.210 SYMLINK libspdk_keyring.so 00:11:15.775 CC lib/thread/iobuf.o 00:11:15.775 CC lib/thread/thread.o 00:11:15.775 CC lib/sock/sock.o 00:11:15.775 CC lib/sock/sock_rpc.o 00:11:16.031 LIB libspdk_sock.a 00:11:16.031 SO libspdk_sock.so.9.0 00:11:16.031 SYMLINK libspdk_sock.so 00:11:16.623 CC lib/nvme/nvme_ctrlr_cmd.o 00:11:16.623 CC lib/nvme/nvme_ctrlr.o 00:11:16.623 CC lib/nvme/nvme_fabric.o 00:11:16.623 CC lib/nvme/nvme_ns_cmd.o 00:11:16.623 CC 
lib/nvme/nvme_pcie_common.o 00:11:16.623 CC lib/nvme/nvme_ns.o 00:11:16.623 CC lib/nvme/nvme_qpair.o 00:11:16.623 CC lib/nvme/nvme_pcie.o 00:11:16.623 CC lib/nvme/nvme.o 00:11:16.881 LIB libspdk_thread.a 00:11:16.881 SO libspdk_thread.so.10.0 00:11:17.138 SYMLINK libspdk_thread.so 00:11:17.138 CC lib/nvme/nvme_quirks.o 00:11:17.138 CC lib/nvme/nvme_transport.o 00:11:17.138 CC lib/nvme/nvme_discovery.o 00:11:17.138 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:11:17.395 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:11:17.395 CC lib/nvme/nvme_tcp.o 00:11:17.395 CC lib/nvme/nvme_opal.o 00:11:17.395 CC lib/nvme/nvme_io_msg.o 00:11:17.395 CC lib/nvme/nvme_poll_group.o 00:11:17.652 CC lib/nvme/nvme_zns.o 00:11:17.652 CC lib/nvme/nvme_stubs.o 00:11:17.909 CC lib/nvme/nvme_auth.o 00:11:17.909 CC lib/nvme/nvme_cuse.o 00:11:17.909 CC lib/accel/accel.o 00:11:17.909 CC lib/blob/blobstore.o 00:11:17.909 CC lib/init/json_config.o 00:11:17.909 CC lib/blob/request.o 00:11:17.909 CC lib/blob/zeroes.o 00:11:18.181 CC lib/accel/accel_rpc.o 00:11:18.181 CC lib/init/subsystem.o 00:11:18.181 CC lib/init/subsystem_rpc.o 00:11:18.181 CC lib/nvme/nvme_rdma.o 00:11:18.438 CC lib/init/rpc.o 00:11:18.438 CC lib/accel/accel_sw.o 00:11:18.438 CC lib/blob/blob_bs_dev.o 00:11:18.438 CC lib/virtio/virtio.o 00:11:18.438 LIB libspdk_init.a 00:11:18.696 CC lib/virtio/virtio_vhost_user.o 00:11:18.696 SO libspdk_init.so.5.0 00:11:18.696 CC lib/virtio/virtio_vfio_user.o 00:11:18.696 CC lib/virtio/virtio_pci.o 00:11:18.696 SYMLINK libspdk_init.so 00:11:18.696 LIB libspdk_accel.a 00:11:18.954 SO libspdk_accel.so.15.0 00:11:18.954 SYMLINK libspdk_accel.so 00:11:18.954 LIB libspdk_virtio.a 00:11:18.954 SO libspdk_virtio.so.7.0 00:11:18.954 CC lib/event/log_rpc.o 00:11:18.954 CC lib/event/app.o 00:11:18.954 CC lib/event/app_rpc.o 00:11:18.954 CC lib/event/reactor.o 00:11:18.954 CC lib/event/scheduler_static.o 00:11:19.211 SYMLINK libspdk_virtio.so 00:11:19.212 CC lib/bdev/bdev.o 00:11:19.212 CC lib/bdev/bdev_zone.o 00:11:19.212 CC lib/bdev/bdev_rpc.o 00:11:19.212 CC lib/bdev/part.o 00:11:19.212 CC lib/bdev/scsi_nvme.o 00:11:19.469 LIB libspdk_event.a 00:11:19.469 LIB libspdk_nvme.a 00:11:19.469 SO libspdk_event.so.13.0 00:11:19.469 SYMLINK libspdk_event.so 00:11:19.469 SO libspdk_nvme.so.13.0 00:11:19.727 SYMLINK libspdk_nvme.so 00:11:20.664 LIB libspdk_blob.a 00:11:20.664 SO libspdk_blob.so.11.0 00:11:20.664 SYMLINK libspdk_blob.so 00:11:20.922 CC lib/blobfs/blobfs.o 00:11:20.922 CC lib/blobfs/tree.o 00:11:20.922 CC lib/lvol/lvol.o 00:11:21.497 LIB libspdk_bdev.a 00:11:21.497 SO libspdk_bdev.so.15.0 00:11:21.754 SYMLINK libspdk_bdev.so 00:11:21.754 LIB libspdk_blobfs.a 00:11:21.754 SO libspdk_blobfs.so.10.0 00:11:21.754 LIB libspdk_lvol.a 00:11:21.754 SO libspdk_lvol.so.10.0 00:11:21.754 CC lib/scsi/dev.o 00:11:21.754 CC lib/scsi/lun.o 00:11:21.754 SYMLINK libspdk_blobfs.so 00:11:21.754 CC lib/nbd/nbd.o 00:11:21.754 CC lib/scsi/port.o 00:11:21.754 CC lib/scsi/scsi.o 00:11:21.754 CC lib/ftl/ftl_core.o 00:11:21.754 CC lib/nbd/nbd_rpc.o 00:11:21.754 CC lib/nvmf/ctrlr.o 00:11:21.754 CC lib/ublk/ublk.o 00:11:21.754 SYMLINK libspdk_lvol.so 00:11:21.754 CC lib/scsi/scsi_bdev.o 00:11:22.011 CC lib/scsi/scsi_pr.o 00:11:22.011 CC lib/scsi/scsi_rpc.o 00:11:22.011 CC lib/scsi/task.o 00:11:22.011 CC lib/nvmf/ctrlr_discovery.o 00:11:22.011 CC lib/nvmf/ctrlr_bdev.o 00:11:22.011 CC lib/nvmf/subsystem.o 00:11:22.268 CC lib/ftl/ftl_init.o 00:11:22.268 LIB libspdk_nbd.a 00:11:22.268 CC lib/nvmf/nvmf.o 00:11:22.268 SO libspdk_nbd.so.7.0 00:11:22.268 CC 
lib/nvmf/nvmf_rpc.o 00:11:22.268 SYMLINK libspdk_nbd.so 00:11:22.268 CC lib/nvmf/transport.o 00:11:22.268 LIB libspdk_scsi.a 00:11:22.268 CC lib/ftl/ftl_layout.o 00:11:22.525 SO libspdk_scsi.so.9.0 00:11:22.525 CC lib/ublk/ublk_rpc.o 00:11:22.525 CC lib/nvmf/tcp.o 00:11:22.525 SYMLINK libspdk_scsi.so 00:11:22.525 CC lib/nvmf/stubs.o 00:11:22.525 LIB libspdk_ublk.a 00:11:22.525 SO libspdk_ublk.so.3.0 00:11:22.525 CC lib/ftl/ftl_debug.o 00:11:22.783 SYMLINK libspdk_ublk.so 00:11:22.783 CC lib/ftl/ftl_io.o 00:11:22.783 CC lib/iscsi/conn.o 00:11:22.783 CC lib/nvmf/mdns_server.o 00:11:22.783 CC lib/nvmf/rdma.o 00:11:23.043 CC lib/vhost/vhost.o 00:11:23.043 CC lib/vhost/vhost_rpc.o 00:11:23.043 CC lib/iscsi/init_grp.o 00:11:23.043 CC lib/ftl/ftl_sb.o 00:11:23.302 CC lib/nvmf/auth.o 00:11:23.302 CC lib/vhost/vhost_scsi.o 00:11:23.302 CC lib/ftl/ftl_l2p.o 00:11:23.302 CC lib/vhost/vhost_blk.o 00:11:23.302 CC lib/vhost/rte_vhost_user.o 00:11:23.302 CC lib/iscsi/iscsi.o 00:11:23.302 CC lib/ftl/ftl_l2p_flat.o 00:11:23.560 CC lib/iscsi/md5.o 00:11:23.560 CC lib/iscsi/param.o 00:11:23.560 CC lib/ftl/ftl_nv_cache.o 00:11:23.560 CC lib/iscsi/portal_grp.o 00:11:23.819 CC lib/iscsi/tgt_node.o 00:11:23.819 CC lib/iscsi/iscsi_subsystem.o 00:11:23.819 CC lib/iscsi/iscsi_rpc.o 00:11:23.819 CC lib/ftl/ftl_band.o 00:11:24.078 CC lib/iscsi/task.o 00:11:24.078 CC lib/ftl/ftl_band_ops.o 00:11:24.078 CC lib/ftl/ftl_writer.o 00:11:24.078 CC lib/ftl/ftl_rq.o 00:11:24.078 CC lib/ftl/ftl_reloc.o 00:11:24.078 LIB libspdk_vhost.a 00:11:24.078 CC lib/ftl/ftl_l2p_cache.o 00:11:24.078 CC lib/ftl/ftl_p2l.o 00:11:24.336 SO libspdk_vhost.so.8.0 00:11:24.336 CC lib/ftl/mngt/ftl_mngt.o 00:11:24.336 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:11:24.336 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:11:24.336 SYMLINK libspdk_vhost.so 00:11:24.336 CC lib/ftl/mngt/ftl_mngt_startup.o 00:11:24.336 CC lib/ftl/mngt/ftl_mngt_md.o 00:11:24.336 CC lib/ftl/mngt/ftl_mngt_misc.o 00:11:24.595 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:11:24.595 LIB libspdk_iscsi.a 00:11:24.595 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:11:24.595 CC lib/ftl/mngt/ftl_mngt_band.o 00:11:24.595 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:11:24.595 LIB libspdk_nvmf.a 00:11:24.595 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:11:24.595 SO libspdk_iscsi.so.8.0 00:11:24.595 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:11:24.595 SO libspdk_nvmf.so.18.0 00:11:24.595 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:11:24.595 CC lib/ftl/utils/ftl_conf.o 00:11:24.595 CC lib/ftl/utils/ftl_md.o 00:11:24.595 CC lib/ftl/utils/ftl_mempool.o 00:11:24.595 CC lib/ftl/utils/ftl_bitmap.o 00:11:24.854 SYMLINK libspdk_iscsi.so 00:11:24.854 CC lib/ftl/utils/ftl_property.o 00:11:24.854 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:11:24.854 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:11:24.854 SYMLINK libspdk_nvmf.so 00:11:24.854 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:11:24.854 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:11:24.854 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:11:24.854 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:11:24.854 CC lib/ftl/upgrade/ftl_sb_v3.o 00:11:24.854 CC lib/ftl/upgrade/ftl_sb_v5.o 00:11:25.112 CC lib/ftl/nvc/ftl_nvc_dev.o 00:11:25.112 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:11:25.112 CC lib/ftl/base/ftl_base_dev.o 00:11:25.112 CC lib/ftl/base/ftl_base_bdev.o 00:11:25.112 CC lib/ftl/ftl_trace.o 00:11:25.371 LIB libspdk_ftl.a 00:11:25.630 SO libspdk_ftl.so.9.0 00:11:25.888 SYMLINK libspdk_ftl.so 00:11:26.147 CC module/env_dpdk/env_dpdk_rpc.o 00:11:26.406 CC module/accel/dsa/accel_dsa.o 00:11:26.406 CC module/accel/error/accel_error.o 
00:11:26.406 CC module/accel/iaa/accel_iaa.o 00:11:26.406 CC module/scheduler/dynamic/scheduler_dynamic.o 00:11:26.406 CC module/keyring/file/keyring.o 00:11:26.406 CC module/accel/ioat/accel_ioat.o 00:11:26.406 CC module/sock/posix/posix.o 00:11:26.406 CC module/blob/bdev/blob_bdev.o 00:11:26.406 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:11:26.406 LIB libspdk_env_dpdk_rpc.a 00:11:26.406 SO libspdk_env_dpdk_rpc.so.6.0 00:11:26.406 SYMLINK libspdk_env_dpdk_rpc.so 00:11:26.406 CC module/accel/iaa/accel_iaa_rpc.o 00:11:26.406 CC module/keyring/file/keyring_rpc.o 00:11:26.406 LIB libspdk_scheduler_dpdk_governor.a 00:11:26.406 CC module/accel/error/accel_error_rpc.o 00:11:26.406 CC module/accel/ioat/accel_ioat_rpc.o 00:11:26.406 LIB libspdk_scheduler_dynamic.a 00:11:26.406 SO libspdk_scheduler_dpdk_governor.so.4.0 00:11:26.665 SO libspdk_scheduler_dynamic.so.4.0 00:11:26.665 CC module/accel/dsa/accel_dsa_rpc.o 00:11:26.665 LIB libspdk_blob_bdev.a 00:11:26.665 SYMLINK libspdk_scheduler_dpdk_governor.so 00:11:26.665 LIB libspdk_accel_iaa.a 00:11:26.665 LIB libspdk_keyring_file.a 00:11:26.665 SO libspdk_blob_bdev.so.11.0 00:11:26.665 SYMLINK libspdk_scheduler_dynamic.so 00:11:26.665 LIB libspdk_accel_error.a 00:11:26.665 LIB libspdk_accel_ioat.a 00:11:26.665 SO libspdk_accel_iaa.so.3.0 00:11:26.665 SO libspdk_keyring_file.so.1.0 00:11:26.665 SO libspdk_accel_error.so.2.0 00:11:26.665 SYMLINK libspdk_blob_bdev.so 00:11:26.665 LIB libspdk_accel_dsa.a 00:11:26.665 SO libspdk_accel_ioat.so.6.0 00:11:26.665 SYMLINK libspdk_keyring_file.so 00:11:26.665 SYMLINK libspdk_accel_iaa.so 00:11:26.665 CC module/sock/uring/uring.o 00:11:26.665 SO libspdk_accel_dsa.so.5.0 00:11:26.665 SYMLINK libspdk_accel_error.so 00:11:26.665 SYMLINK libspdk_accel_ioat.so 00:11:26.665 CC module/scheduler/gscheduler/gscheduler.o 00:11:26.665 SYMLINK libspdk_accel_dsa.so 00:11:26.923 LIB libspdk_scheduler_gscheduler.a 00:11:26.923 CC module/bdev/error/vbdev_error.o 00:11:26.923 CC module/bdev/lvol/vbdev_lvol.o 00:11:26.923 CC module/bdev/malloc/bdev_malloc.o 00:11:26.923 SO libspdk_scheduler_gscheduler.so.4.0 00:11:26.923 CC module/bdev/delay/vbdev_delay.o 00:11:26.923 CC module/bdev/null/bdev_null.o 00:11:26.923 CC module/bdev/gpt/gpt.o 00:11:26.923 CC module/blobfs/bdev/blobfs_bdev.o 00:11:26.923 LIB libspdk_sock_posix.a 00:11:26.923 SO libspdk_sock_posix.so.6.0 00:11:27.183 SYMLINK libspdk_scheduler_gscheduler.so 00:11:27.183 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:11:27.183 SYMLINK libspdk_sock_posix.so 00:11:27.183 CC module/bdev/error/vbdev_error_rpc.o 00:11:27.183 CC module/bdev/gpt/vbdev_gpt.o 00:11:27.183 LIB libspdk_blobfs_bdev.a 00:11:27.183 CC module/bdev/null/bdev_null_rpc.o 00:11:27.183 CC module/bdev/malloc/bdev_malloc_rpc.o 00:11:27.183 CC module/bdev/delay/vbdev_delay_rpc.o 00:11:27.183 LIB libspdk_bdev_error.a 00:11:27.183 SO libspdk_blobfs_bdev.so.6.0 00:11:27.183 LIB libspdk_sock_uring.a 00:11:27.183 SO libspdk_bdev_error.so.6.0 00:11:27.442 CC module/bdev/nvme/bdev_nvme.o 00:11:27.442 SO libspdk_sock_uring.so.5.0 00:11:27.442 CC module/bdev/passthru/vbdev_passthru.o 00:11:27.442 SYMLINK libspdk_blobfs_bdev.so 00:11:27.442 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:11:27.442 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:11:27.442 SYMLINK libspdk_bdev_error.so 00:11:27.442 LIB libspdk_bdev_gpt.a 00:11:27.442 SYMLINK libspdk_sock_uring.so 00:11:27.442 CC module/bdev/nvme/bdev_nvme_rpc.o 00:11:27.442 LIB libspdk_bdev_malloc.a 00:11:27.442 LIB libspdk_bdev_null.a 00:11:27.442 LIB libspdk_bdev_delay.a 
00:11:27.442 SO libspdk_bdev_gpt.so.6.0 00:11:27.442 SO libspdk_bdev_malloc.so.6.0 00:11:27.442 SO libspdk_bdev_null.so.6.0 00:11:27.442 SO libspdk_bdev_delay.so.6.0 00:11:27.442 SYMLINK libspdk_bdev_gpt.so 00:11:27.442 SYMLINK libspdk_bdev_malloc.so 00:11:27.442 SYMLINK libspdk_bdev_null.so 00:11:27.442 CC module/bdev/nvme/nvme_rpc.o 00:11:27.442 SYMLINK libspdk_bdev_delay.so 00:11:27.442 CC module/bdev/nvme/bdev_mdns_client.o 00:11:27.442 CC module/bdev/nvme/vbdev_opal.o 00:11:27.442 CC module/bdev/raid/bdev_raid.o 00:11:27.701 LIB libspdk_bdev_passthru.a 00:11:27.701 SO libspdk_bdev_passthru.so.6.0 00:11:27.701 LIB libspdk_bdev_lvol.a 00:11:27.701 CC module/bdev/nvme/vbdev_opal_rpc.o 00:11:27.701 CC module/bdev/split/vbdev_split.o 00:11:27.701 CC module/bdev/zone_block/vbdev_zone_block.o 00:11:27.701 SO libspdk_bdev_lvol.so.6.0 00:11:27.701 SYMLINK libspdk_bdev_passthru.so 00:11:27.701 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:11:27.701 CC module/bdev/split/vbdev_split_rpc.o 00:11:27.701 SYMLINK libspdk_bdev_lvol.so 00:11:27.701 CC module/bdev/raid/bdev_raid_rpc.o 00:11:27.958 LIB libspdk_bdev_split.a 00:11:27.958 SO libspdk_bdev_split.so.6.0 00:11:27.958 CC module/bdev/uring/bdev_uring.o 00:11:27.958 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:11:27.958 CC module/bdev/raid/bdev_raid_sb.o 00:11:27.958 CC module/bdev/raid/raid0.o 00:11:27.958 SYMLINK libspdk_bdev_split.so 00:11:27.958 CC module/bdev/raid/raid1.o 00:11:27.958 CC module/bdev/aio/bdev_aio.o 00:11:27.958 CC module/bdev/ftl/bdev_ftl.o 00:11:27.958 CC module/bdev/iscsi/bdev_iscsi.o 00:11:27.958 LIB libspdk_bdev_zone_block.a 00:11:28.217 SO libspdk_bdev_zone_block.so.6.0 00:11:28.217 SYMLINK libspdk_bdev_zone_block.so 00:11:28.217 CC module/bdev/raid/concat.o 00:11:28.217 CC module/bdev/ftl/bdev_ftl_rpc.o 00:11:28.217 CC module/bdev/uring/bdev_uring_rpc.o 00:11:28.217 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:11:28.217 CC module/bdev/aio/bdev_aio_rpc.o 00:11:28.217 CC module/bdev/virtio/bdev_virtio_scsi.o 00:11:28.217 CC module/bdev/virtio/bdev_virtio_blk.o 00:11:28.217 LIB libspdk_bdev_uring.a 00:11:28.475 LIB libspdk_bdev_ftl.a 00:11:28.475 CC module/bdev/virtio/bdev_virtio_rpc.o 00:11:28.475 SO libspdk_bdev_uring.so.6.0 00:11:28.475 LIB libspdk_bdev_iscsi.a 00:11:28.476 LIB libspdk_bdev_raid.a 00:11:28.476 SO libspdk_bdev_ftl.so.6.0 00:11:28.476 SYMLINK libspdk_bdev_uring.so 00:11:28.476 SO libspdk_bdev_iscsi.so.6.0 00:11:28.476 LIB libspdk_bdev_aio.a 00:11:28.476 SYMLINK libspdk_bdev_ftl.so 00:11:28.476 SO libspdk_bdev_raid.so.6.0 00:11:28.476 SO libspdk_bdev_aio.so.6.0 00:11:28.476 SYMLINK libspdk_bdev_iscsi.so 00:11:28.476 SYMLINK libspdk_bdev_aio.so 00:11:28.476 SYMLINK libspdk_bdev_raid.so 00:11:28.735 LIB libspdk_bdev_virtio.a 00:11:28.735 SO libspdk_bdev_virtio.so.6.0 00:11:28.993 SYMLINK libspdk_bdev_virtio.so 00:11:29.252 LIB libspdk_bdev_nvme.a 00:11:29.252 SO libspdk_bdev_nvme.so.7.0 00:11:29.513 SYMLINK libspdk_bdev_nvme.so 00:11:30.082 CC module/event/subsystems/sock/sock.o 00:11:30.082 CC module/event/subsystems/keyring/keyring.o 00:11:30.082 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:11:30.082 CC module/event/subsystems/iobuf/iobuf.o 00:11:30.082 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:11:30.082 CC module/event/subsystems/scheduler/scheduler.o 00:11:30.082 CC module/event/subsystems/vmd/vmd.o 00:11:30.082 CC module/event/subsystems/vmd/vmd_rpc.o 00:11:30.082 LIB libspdk_event_keyring.a 00:11:30.082 LIB libspdk_event_scheduler.a 00:11:30.082 LIB libspdk_event_iobuf.a 00:11:30.082 LIB 
libspdk_event_sock.a 00:11:30.082 SO libspdk_event_keyring.so.1.0 00:11:30.082 LIB libspdk_event_vhost_blk.a 00:11:30.082 SO libspdk_event_scheduler.so.4.0 00:11:30.082 SO libspdk_event_sock.so.5.0 00:11:30.082 LIB libspdk_event_vmd.a 00:11:30.082 SO libspdk_event_iobuf.so.3.0 00:11:30.340 SO libspdk_event_vhost_blk.so.3.0 00:11:30.340 SYMLINK libspdk_event_keyring.so 00:11:30.340 SYMLINK libspdk_event_scheduler.so 00:11:30.340 SO libspdk_event_vmd.so.6.0 00:11:30.340 SYMLINK libspdk_event_sock.so 00:11:30.340 SYMLINK libspdk_event_iobuf.so 00:11:30.340 SYMLINK libspdk_event_vhost_blk.so 00:11:30.340 SYMLINK libspdk_event_vmd.so 00:11:30.598 CC module/event/subsystems/accel/accel.o 00:11:30.857 LIB libspdk_event_accel.a 00:11:30.857 SO libspdk_event_accel.so.6.0 00:11:30.857 SYMLINK libspdk_event_accel.so 00:11:31.424 CC module/event/subsystems/bdev/bdev.o 00:11:31.424 LIB libspdk_event_bdev.a 00:11:31.424 SO libspdk_event_bdev.so.6.0 00:11:31.683 SYMLINK libspdk_event_bdev.so 00:11:31.942 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:11:31.942 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:11:31.942 CC module/event/subsystems/scsi/scsi.o 00:11:31.942 CC module/event/subsystems/ublk/ublk.o 00:11:31.942 CC module/event/subsystems/nbd/nbd.o 00:11:31.942 LIB libspdk_event_scsi.a 00:11:31.942 LIB libspdk_event_nvmf.a 00:11:31.942 LIB libspdk_event_ublk.a 00:11:31.942 LIB libspdk_event_nbd.a 00:11:31.942 SO libspdk_event_scsi.so.6.0 00:11:32.201 SO libspdk_event_nbd.so.6.0 00:11:32.201 SO libspdk_event_nvmf.so.6.0 00:11:32.201 SO libspdk_event_ublk.so.3.0 00:11:32.201 SYMLINK libspdk_event_nbd.so 00:11:32.201 SYMLINK libspdk_event_nvmf.so 00:11:32.201 SYMLINK libspdk_event_scsi.so 00:11:32.201 SYMLINK libspdk_event_ublk.so 00:11:32.459 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:11:32.459 CC module/event/subsystems/iscsi/iscsi.o 00:11:32.718 LIB libspdk_event_vhost_scsi.a 00:11:32.718 LIB libspdk_event_iscsi.a 00:11:32.718 SO libspdk_event_vhost_scsi.so.3.0 00:11:32.718 SO libspdk_event_iscsi.so.6.0 00:11:32.718 SYMLINK libspdk_event_vhost_scsi.so 00:11:32.718 SYMLINK libspdk_event_iscsi.so 00:11:32.977 SO libspdk.so.6.0 00:11:32.977 SYMLINK libspdk.so 00:11:33.236 TEST_HEADER include/spdk/accel.h 00:11:33.236 TEST_HEADER include/spdk/accel_module.h 00:11:33.236 TEST_HEADER include/spdk/assert.h 00:11:33.236 TEST_HEADER include/spdk/barrier.h 00:11:33.236 CXX app/trace/trace.o 00:11:33.236 TEST_HEADER include/spdk/base64.h 00:11:33.495 TEST_HEADER include/spdk/bdev.h 00:11:33.495 TEST_HEADER include/spdk/bdev_module.h 00:11:33.495 TEST_HEADER include/spdk/bdev_zone.h 00:11:33.495 TEST_HEADER include/spdk/bit_array.h 00:11:33.495 TEST_HEADER include/spdk/bit_pool.h 00:11:33.495 TEST_HEADER include/spdk/blob_bdev.h 00:11:33.495 TEST_HEADER include/spdk/blobfs_bdev.h 00:11:33.495 TEST_HEADER include/spdk/blobfs.h 00:11:33.495 TEST_HEADER include/spdk/blob.h 00:11:33.495 TEST_HEADER include/spdk/conf.h 00:11:33.495 TEST_HEADER include/spdk/config.h 00:11:33.495 TEST_HEADER include/spdk/cpuset.h 00:11:33.495 TEST_HEADER include/spdk/crc16.h 00:11:33.495 TEST_HEADER include/spdk/crc32.h 00:11:33.495 TEST_HEADER include/spdk/crc64.h 00:11:33.495 TEST_HEADER include/spdk/dif.h 00:11:33.495 TEST_HEADER include/spdk/dma.h 00:11:33.495 TEST_HEADER include/spdk/endian.h 00:11:33.495 TEST_HEADER include/spdk/env_dpdk.h 00:11:33.495 TEST_HEADER include/spdk/env.h 00:11:33.495 TEST_HEADER include/spdk/event.h 00:11:33.495 TEST_HEADER include/spdk/fd_group.h 00:11:33.495 TEST_HEADER include/spdk/fd.h 
00:11:33.495 TEST_HEADER include/spdk/file.h 00:11:33.495 TEST_HEADER include/spdk/ftl.h 00:11:33.495 TEST_HEADER include/spdk/gpt_spec.h 00:11:33.495 TEST_HEADER include/spdk/hexlify.h 00:11:33.495 TEST_HEADER include/spdk/histogram_data.h 00:11:33.495 TEST_HEADER include/spdk/idxd.h 00:11:33.495 TEST_HEADER include/spdk/idxd_spec.h 00:11:33.495 TEST_HEADER include/spdk/init.h 00:11:33.495 TEST_HEADER include/spdk/ioat.h 00:11:33.495 TEST_HEADER include/spdk/ioat_spec.h 00:11:33.495 TEST_HEADER include/spdk/iscsi_spec.h 00:11:33.495 TEST_HEADER include/spdk/json.h 00:11:33.495 TEST_HEADER include/spdk/jsonrpc.h 00:11:33.495 TEST_HEADER include/spdk/keyring.h 00:11:33.495 CC examples/accel/perf/accel_perf.o 00:11:33.495 TEST_HEADER include/spdk/keyring_module.h 00:11:33.495 TEST_HEADER include/spdk/likely.h 00:11:33.495 CC test/event/event_perf/event_perf.o 00:11:33.495 TEST_HEADER include/spdk/log.h 00:11:33.495 TEST_HEADER include/spdk/lvol.h 00:11:33.495 TEST_HEADER include/spdk/memory.h 00:11:33.495 TEST_HEADER include/spdk/mmio.h 00:11:33.495 TEST_HEADER include/spdk/nbd.h 00:11:33.495 TEST_HEADER include/spdk/notify.h 00:11:33.495 TEST_HEADER include/spdk/nvme.h 00:11:33.495 CC test/app/bdev_svc/bdev_svc.o 00:11:33.495 TEST_HEADER include/spdk/nvme_intel.h 00:11:33.495 CC test/dma/test_dma/test_dma.o 00:11:33.495 CC test/accel/dif/dif.o 00:11:33.495 TEST_HEADER include/spdk/nvme_ocssd.h 00:11:33.495 CC test/blobfs/mkfs/mkfs.o 00:11:33.495 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:11:33.495 TEST_HEADER include/spdk/nvme_spec.h 00:11:33.495 TEST_HEADER include/spdk/nvme_zns.h 00:11:33.495 TEST_HEADER include/spdk/nvmf_cmd.h 00:11:33.495 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:11:33.495 TEST_HEADER include/spdk/nvmf.h 00:11:33.495 TEST_HEADER include/spdk/nvmf_spec.h 00:11:33.495 TEST_HEADER include/spdk/nvmf_transport.h 00:11:33.495 CC test/bdev/bdevio/bdevio.o 00:11:33.495 TEST_HEADER include/spdk/opal.h 00:11:33.495 TEST_HEADER include/spdk/opal_spec.h 00:11:33.495 TEST_HEADER include/spdk/pci_ids.h 00:11:33.495 TEST_HEADER include/spdk/pipe.h 00:11:33.495 TEST_HEADER include/spdk/queue.h 00:11:33.495 TEST_HEADER include/spdk/reduce.h 00:11:33.495 TEST_HEADER include/spdk/rpc.h 00:11:33.495 TEST_HEADER include/spdk/scheduler.h 00:11:33.495 TEST_HEADER include/spdk/scsi.h 00:11:33.495 TEST_HEADER include/spdk/scsi_spec.h 00:11:33.495 TEST_HEADER include/spdk/sock.h 00:11:33.495 TEST_HEADER include/spdk/stdinc.h 00:11:33.495 TEST_HEADER include/spdk/string.h 00:11:33.495 TEST_HEADER include/spdk/thread.h 00:11:33.495 TEST_HEADER include/spdk/trace.h 00:11:33.495 TEST_HEADER include/spdk/trace_parser.h 00:11:33.495 TEST_HEADER include/spdk/tree.h 00:11:33.495 CC test/env/mem_callbacks/mem_callbacks.o 00:11:33.495 TEST_HEADER include/spdk/ublk.h 00:11:33.495 TEST_HEADER include/spdk/util.h 00:11:33.495 TEST_HEADER include/spdk/uuid.h 00:11:33.495 TEST_HEADER include/spdk/version.h 00:11:33.495 TEST_HEADER include/spdk/vfio_user_pci.h 00:11:33.495 TEST_HEADER include/spdk/vfio_user_spec.h 00:11:33.495 TEST_HEADER include/spdk/vhost.h 00:11:33.495 TEST_HEADER include/spdk/vmd.h 00:11:33.495 TEST_HEADER include/spdk/xor.h 00:11:33.495 TEST_HEADER include/spdk/zipf.h 00:11:33.495 CXX test/cpp_headers/accel.o 00:11:33.495 LINK event_perf 00:11:33.752 LINK bdev_svc 00:11:33.752 LINK mkfs 00:11:33.752 LINK spdk_trace 00:11:33.752 CXX test/cpp_headers/accel_module.o 00:11:33.752 CC test/event/reactor/reactor.o 00:11:33.752 LINK test_dma 00:11:33.752 LINK accel_perf 00:11:33.752 LINK 
bdevio 00:11:34.008 LINK dif 00:11:34.008 CXX test/cpp_headers/assert.o 00:11:34.008 LINK reactor 00:11:34.008 CC test/event/reactor_perf/reactor_perf.o 00:11:34.008 CC app/trace_record/trace_record.o 00:11:34.008 CXX test/cpp_headers/barrier.o 00:11:34.008 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:11:34.008 LINK mem_callbacks 00:11:34.266 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:11:34.266 LINK reactor_perf 00:11:34.266 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:11:34.266 CXX test/cpp_headers/base64.o 00:11:34.266 CC app/nvmf_tgt/nvmf_main.o 00:11:34.266 CC examples/bdev/hello_world/hello_bdev.o 00:11:34.266 CC app/iscsi_tgt/iscsi_tgt.o 00:11:34.266 LINK spdk_trace_record 00:11:34.266 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:11:34.266 CXX test/cpp_headers/bdev.o 00:11:34.266 CC test/env/vtophys/vtophys.o 00:11:34.266 LINK nvmf_tgt 00:11:34.525 CC test/event/app_repeat/app_repeat.o 00:11:34.525 LINK nvme_fuzz 00:11:34.525 LINK iscsi_tgt 00:11:34.525 LINK hello_bdev 00:11:34.525 CC test/app/histogram_perf/histogram_perf.o 00:11:34.525 LINK vtophys 00:11:34.525 CXX test/cpp_headers/bdev_module.o 00:11:34.525 LINK app_repeat 00:11:34.525 LINK histogram_perf 00:11:34.783 LINK vhost_fuzz 00:11:34.783 CC test/app/jsoncat/jsoncat.o 00:11:34.783 CC test/event/scheduler/scheduler.o 00:11:34.783 CXX test/cpp_headers/bdev_zone.o 00:11:34.783 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:11:34.783 CC examples/bdev/bdevperf/bdevperf.o 00:11:34.783 CC app/spdk_tgt/spdk_tgt.o 00:11:34.783 LINK jsoncat 00:11:34.783 CC app/spdk_lspci/spdk_lspci.o 00:11:34.783 CXX test/cpp_headers/bit_array.o 00:11:34.783 CC test/app/stub/stub.o 00:11:35.047 LINK env_dpdk_post_init 00:11:35.048 LINK scheduler 00:11:35.048 LINK spdk_lspci 00:11:35.048 CXX test/cpp_headers/bit_pool.o 00:11:35.048 CC test/env/memory/memory_ut.o 00:11:35.048 LINK spdk_tgt 00:11:35.048 LINK stub 00:11:35.048 CXX test/cpp_headers/blob_bdev.o 00:11:35.305 CC test/lvol/esnap/esnap.o 00:11:35.305 CC test/rpc_client/rpc_client_test.o 00:11:35.305 CC examples/blob/hello_world/hello_blob.o 00:11:35.305 CC test/env/pci/pci_ut.o 00:11:35.305 CC app/spdk_nvme_perf/perf.o 00:11:35.305 CC test/nvme/aer/aer.o 00:11:35.305 CXX test/cpp_headers/blobfs_bdev.o 00:11:35.305 LINK rpc_client_test 00:11:35.564 LINK bdevperf 00:11:35.564 LINK hello_blob 00:11:35.564 CXX test/cpp_headers/blobfs.o 00:11:35.564 LINK aer 00:11:35.564 LINK iscsi_fuzz 00:11:35.564 CC test/nvme/reset/reset.o 00:11:35.564 LINK pci_ut 00:11:35.564 CXX test/cpp_headers/blob.o 00:11:35.822 LINK memory_ut 00:11:35.822 CXX test/cpp_headers/conf.o 00:11:35.822 CC examples/blob/cli/blobcli.o 00:11:35.822 CC examples/ioat/perf/perf.o 00:11:35.822 LINK reset 00:11:35.822 CC examples/nvme/hello_world/hello_world.o 00:11:35.822 CC examples/nvme/reconnect/reconnect.o 00:11:35.822 CXX test/cpp_headers/config.o 00:11:35.822 CC app/spdk_nvme_identify/identify.o 00:11:36.080 CXX test/cpp_headers/cpuset.o 00:11:36.080 CC app/spdk_nvme_discover/discovery_aer.o 00:11:36.080 LINK ioat_perf 00:11:36.080 LINK spdk_nvme_perf 00:11:36.080 CC test/nvme/sgl/sgl.o 00:11:36.080 CXX test/cpp_headers/crc16.o 00:11:36.080 LINK hello_world 00:11:36.080 LINK blobcli 00:11:36.337 LINK spdk_nvme_discover 00:11:36.337 LINK reconnect 00:11:36.337 CXX test/cpp_headers/crc32.o 00:11:36.337 CC examples/ioat/verify/verify.o 00:11:36.337 LINK sgl 00:11:36.337 CC app/spdk_top/spdk_top.o 00:11:36.337 CC test/thread/poller_perf/poller_perf.o 00:11:36.337 CXX test/cpp_headers/crc64.o 00:11:36.595 LINK verify 
00:11:36.595 CC examples/nvme/nvme_manage/nvme_manage.o 00:11:36.595 CC test/nvme/e2edp/nvme_dp.o 00:11:36.595 CC app/vhost/vhost.o 00:11:36.595 LINK poller_perf 00:11:36.595 CXX test/cpp_headers/dif.o 00:11:36.595 CC app/spdk_dd/spdk_dd.o 00:11:36.595 LINK spdk_nvme_identify 00:11:36.854 LINK vhost 00:11:36.854 CXX test/cpp_headers/dma.o 00:11:36.854 LINK nvme_dp 00:11:36.854 CC examples/nvme/arbitration/arbitration.o 00:11:36.854 CC app/fio/nvme/fio_plugin.o 00:11:36.854 CXX test/cpp_headers/endian.o 00:11:36.854 LINK nvme_manage 00:11:36.854 CC examples/nvme/hotplug/hotplug.o 00:11:37.113 CC examples/nvme/cmb_copy/cmb_copy.o 00:11:37.113 CC test/nvme/overhead/overhead.o 00:11:37.113 CXX test/cpp_headers/env_dpdk.o 00:11:37.113 LINK spdk_dd 00:11:37.113 LINK spdk_top 00:11:37.113 CC examples/nvme/abort/abort.o 00:11:37.113 LINK arbitration 00:11:37.113 LINK hotplug 00:11:37.113 LINK cmb_copy 00:11:37.113 CXX test/cpp_headers/env.o 00:11:37.372 LINK overhead 00:11:37.372 LINK spdk_nvme 00:11:37.372 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:11:37.372 CXX test/cpp_headers/event.o 00:11:37.372 CC app/fio/bdev/fio_plugin.o 00:11:37.372 CC examples/sock/hello_world/hello_sock.o 00:11:37.631 LINK abort 00:11:37.631 CC test/nvme/err_injection/err_injection.o 00:11:37.631 CC examples/vmd/lsvmd/lsvmd.o 00:11:37.631 CC test/nvme/startup/startup.o 00:11:37.631 LINK pmr_persistence 00:11:37.631 CXX test/cpp_headers/fd_group.o 00:11:37.631 CC examples/nvmf/nvmf/nvmf.o 00:11:37.631 LINK lsvmd 00:11:37.631 CXX test/cpp_headers/fd.o 00:11:37.631 LINK err_injection 00:11:37.631 LINK startup 00:11:37.631 CXX test/cpp_headers/file.o 00:11:37.631 LINK hello_sock 00:11:37.888 CC test/nvme/reserve/reserve.o 00:11:37.888 LINK nvmf 00:11:37.888 CXX test/cpp_headers/ftl.o 00:11:37.888 CC examples/vmd/led/led.o 00:11:37.888 CC test/nvme/simple_copy/simple_copy.o 00:11:37.888 LINK spdk_bdev 00:11:37.888 CC test/nvme/connect_stress/connect_stress.o 00:11:37.888 LINK reserve 00:11:38.145 CC examples/util/zipf/zipf.o 00:11:38.145 CXX test/cpp_headers/gpt_spec.o 00:11:38.145 CC examples/thread/thread/thread_ex.o 00:11:38.145 CXX test/cpp_headers/hexlify.o 00:11:38.145 LINK led 00:11:38.145 LINK connect_stress 00:11:38.145 LINK simple_copy 00:11:38.145 LINK zipf 00:11:38.145 CC examples/idxd/perf/perf.o 00:11:38.145 CXX test/cpp_headers/histogram_data.o 00:11:38.403 CC test/nvme/boot_partition/boot_partition.o 00:11:38.403 CC test/nvme/compliance/nvme_compliance.o 00:11:38.403 LINK thread 00:11:38.403 CXX test/cpp_headers/idxd.o 00:11:38.403 CC examples/interrupt_tgt/interrupt_tgt.o 00:11:38.403 CC test/nvme/fused_ordering/fused_ordering.o 00:11:38.403 CC test/nvme/doorbell_aers/doorbell_aers.o 00:11:38.403 LINK boot_partition 00:11:38.403 CC test/nvme/fdp/fdp.o 00:11:38.403 CXX test/cpp_headers/idxd_spec.o 00:11:38.661 CXX test/cpp_headers/init.o 00:11:38.661 LINK idxd_perf 00:11:38.661 LINK interrupt_tgt 00:11:38.661 LINK nvme_compliance 00:11:38.661 LINK doorbell_aers 00:11:38.661 LINK fused_ordering 00:11:38.661 CXX test/cpp_headers/ioat.o 00:11:38.661 CXX test/cpp_headers/ioat_spec.o 00:11:38.661 CXX test/cpp_headers/iscsi_spec.o 00:11:38.661 CXX test/cpp_headers/json.o 00:11:38.661 CC test/nvme/cuse/cuse.o 00:11:38.919 CXX test/cpp_headers/jsonrpc.o 00:11:38.919 CXX test/cpp_headers/keyring.o 00:11:38.919 CXX test/cpp_headers/keyring_module.o 00:11:38.919 LINK fdp 00:11:38.919 CXX test/cpp_headers/likely.o 00:11:38.919 CXX test/cpp_headers/log.o 00:11:38.919 CXX test/cpp_headers/lvol.o 00:11:38.919 CXX 
test/cpp_headers/memory.o 00:11:38.919 CXX test/cpp_headers/mmio.o 00:11:38.919 CXX test/cpp_headers/nbd.o 00:11:38.919 CXX test/cpp_headers/notify.o 00:11:38.919 CXX test/cpp_headers/nvme.o 00:11:38.919 CXX test/cpp_headers/nvme_intel.o 00:11:38.919 CXX test/cpp_headers/nvme_ocssd.o 00:11:38.919 CXX test/cpp_headers/nvme_ocssd_spec.o 00:11:38.919 CXX test/cpp_headers/nvme_spec.o 00:11:39.177 CXX test/cpp_headers/nvme_zns.o 00:11:39.177 CXX test/cpp_headers/nvmf_cmd.o 00:11:39.177 CXX test/cpp_headers/nvmf_fc_spec.o 00:11:39.177 CXX test/cpp_headers/nvmf.o 00:11:39.177 CXX test/cpp_headers/nvmf_spec.o 00:11:39.177 CXX test/cpp_headers/nvmf_transport.o 00:11:39.177 CXX test/cpp_headers/opal.o 00:11:39.177 CXX test/cpp_headers/opal_spec.o 00:11:39.177 LINK esnap 00:11:39.177 CXX test/cpp_headers/pci_ids.o 00:11:39.177 CXX test/cpp_headers/pipe.o 00:11:39.436 CXX test/cpp_headers/queue.o 00:11:39.436 CXX test/cpp_headers/reduce.o 00:11:39.436 CXX test/cpp_headers/rpc.o 00:11:39.436 CXX test/cpp_headers/scheduler.o 00:11:39.436 CXX test/cpp_headers/scsi.o 00:11:39.436 CXX test/cpp_headers/scsi_spec.o 00:11:39.436 CXX test/cpp_headers/sock.o 00:11:39.436 CXX test/cpp_headers/stdinc.o 00:11:39.436 CXX test/cpp_headers/string.o 00:11:39.436 CXX test/cpp_headers/thread.o 00:11:39.436 CXX test/cpp_headers/trace.o 00:11:39.436 CXX test/cpp_headers/trace_parser.o 00:11:39.436 CXX test/cpp_headers/tree.o 00:11:39.436 CXX test/cpp_headers/ublk.o 00:11:39.436 CXX test/cpp_headers/util.o 00:11:39.695 CXX test/cpp_headers/uuid.o 00:11:39.695 CXX test/cpp_headers/version.o 00:11:39.695 CXX test/cpp_headers/vfio_user_pci.o 00:11:39.695 CXX test/cpp_headers/vfio_user_spec.o 00:11:39.695 CXX test/cpp_headers/vhost.o 00:11:39.695 LINK cuse 00:11:39.695 CXX test/cpp_headers/xor.o 00:11:39.695 CXX test/cpp_headers/vmd.o 00:11:39.695 CXX test/cpp_headers/zipf.o 00:11:39.954 00:11:39.954 real 0m56.734s 00:11:39.954 user 5m5.842s 00:11:39.954 sys 1m37.653s 00:11:39.954 13:49:38 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:11:39.954 13:49:38 make -- common/autotest_common.sh@10 -- $ set +x 00:11:39.954 ************************************ 00:11:39.954 END TEST make 00:11:39.954 ************************************ 00:11:39.954 13:49:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:11:39.954 13:49:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:11:39.954 13:49:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:11:39.954 13:49:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:39.954 13:49:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:11:39.954 13:49:38 -- pm/common@44 -- $ pid=5132 00:11:39.954 13:49:38 -- pm/common@50 -- $ kill -TERM 5132 00:11:39.954 13:49:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:39.954 13:49:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:11:39.954 13:49:38 -- pm/common@44 -- $ pid=5134 00:11:39.954 13:49:38 -- pm/common@50 -- $ kill -TERM 5134 00:11:40.214 13:49:38 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:40.214 13:49:38 -- nvmf/common.sh@7 -- # uname -s 00:11:40.214 13:49:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.214 13:49:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.214 13:49:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.214 13:49:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.214 13:49:38 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.214 13:49:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.214 13:49:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.214 13:49:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.214 13:49:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.214 13:49:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.214 13:49:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:11:40.214 13:49:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:11:40.214 13:49:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.214 13:49:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.214 13:49:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:40.214 13:49:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.214 13:49:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.214 13:49:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.214 13:49:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.214 13:49:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.214 13:49:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.214 13:49:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.214 13:49:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.214 13:49:38 -- paths/export.sh@5 -- # export PATH 00:11:40.214 13:49:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.214 13:49:38 -- nvmf/common.sh@47 -- # : 0 00:11:40.214 13:49:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.214 13:49:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.214 13:49:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.214 13:49:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.214 13:49:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.214 13:49:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.214 13:49:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.214 13:49:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.214 13:49:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:11:40.214 13:49:38 -- spdk/autotest.sh@32 -- # uname -s 00:11:40.214 13:49:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:11:40.214 13:49:38 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:11:40.214 13:49:38 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:40.214 13:49:38 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:11:40.214 13:49:38 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:40.214 13:49:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:11:40.214 13:49:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:11:40.214 13:49:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:11:40.214 13:49:38 -- spdk/autotest.sh@48 -- # udevadm_pid=52121 00:11:40.214 13:49:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:11:40.214 13:49:38 -- pm/common@17 -- # local monitor 00:11:40.214 13:49:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:40.214 13:49:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:40.214 13:49:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:11:40.214 13:49:38 -- pm/common@21 -- # date +%s 00:11:40.214 13:49:38 -- pm/common@25 -- # sleep 1 00:11:40.214 13:49:38 -- pm/common@21 -- # date +%s 00:11:40.214 13:49:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715780978 00:11:40.214 13:49:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715780978 00:11:40.214 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715780978_collect-vmstat.pm.log 00:11:40.214 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715780978_collect-cpu-load.pm.log 00:11:41.149 13:49:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:11:41.149 13:49:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:11:41.149 13:49:39 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:41.149 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:11:41.149 13:49:39 -- spdk/autotest.sh@59 -- # create_test_list 00:11:41.149 13:49:39 -- common/autotest_common.sh@744 -- # xtrace_disable 00:11:41.149 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:11:41.409 13:49:39 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:11:41.409 13:49:39 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:11:41.409 13:49:39 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:11:41.409 13:49:39 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:11:41.409 13:49:39 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:11:41.409 13:49:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:11:41.409 13:49:39 -- common/autotest_common.sh@1451 -- # uname 00:11:41.409 13:49:39 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:11:41.409 13:49:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:11:41.409 13:49:39 -- common/autotest_common.sh@1471 -- # uname 00:11:41.409 13:49:39 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:11:41.409 13:49:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:11:41.409 13:49:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:11:41.409 13:49:39 -- spdk/autotest.sh@72 -- # hash lcov 00:11:41.409 13:49:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc 
== *\c\l\a\n\g* ]] 00:11:41.409 13:49:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:11:41.409 --rc lcov_branch_coverage=1 00:11:41.409 --rc lcov_function_coverage=1 00:11:41.409 --rc genhtml_branch_coverage=1 00:11:41.409 --rc genhtml_function_coverage=1 00:11:41.409 --rc genhtml_legend=1 00:11:41.409 --rc geninfo_all_blocks=1 00:11:41.409 ' 00:11:41.409 13:49:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:11:41.409 --rc lcov_branch_coverage=1 00:11:41.409 --rc lcov_function_coverage=1 00:11:41.409 --rc genhtml_branch_coverage=1 00:11:41.409 --rc genhtml_function_coverage=1 00:11:41.409 --rc genhtml_legend=1 00:11:41.409 --rc geninfo_all_blocks=1 00:11:41.409 ' 00:11:41.409 13:49:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:11:41.409 --rc lcov_branch_coverage=1 00:11:41.409 --rc lcov_function_coverage=1 00:11:41.409 --rc genhtml_branch_coverage=1 00:11:41.409 --rc genhtml_function_coverage=1 00:11:41.409 --rc genhtml_legend=1 00:11:41.409 --rc geninfo_all_blocks=1 00:11:41.409 --no-external' 00:11:41.409 13:49:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:11:41.409 --rc lcov_branch_coverage=1 00:11:41.409 --rc lcov_function_coverage=1 00:11:41.409 --rc genhtml_branch_coverage=1 00:11:41.409 --rc genhtml_function_coverage=1 00:11:41.409 --rc genhtml_legend=1 00:11:41.409 --rc geninfo_all_blocks=1 00:11:41.409 --no-external' 00:11:41.409 13:49:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:11:41.409 lcov: LCOV version 1.14 00:11:41.409 13:49:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:11:49.524 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:11:49.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:11:49.524 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:11:49.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:11:49.524 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:11:49.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:11:56.197 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:11:56.197 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:12:08.459 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:12:08.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:12:08.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:12:08.460 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:12:08.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:12:08.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:12:11.755 13:50:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:12:11.755 13:50:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:11.755 13:50:09 -- common/autotest_common.sh@10 -- # set +x 00:12:11.755 13:50:09 -- spdk/autotest.sh@91 -- # rm -f 00:12:11.755 13:50:09 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:12.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.326 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:12:12.326 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:12:12.326 13:50:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:12:12.326 13:50:10 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:12:12.326 13:50:10 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:12:12.326 13:50:10 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:12:12.326 13:50:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:12.326 13:50:10 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:12:12.326 13:50:10 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:12:12.326 13:50:10 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:12.326 13:50:10 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:12.326 13:50:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:12.326 13:50:10 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:12:12.326 13:50:10 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:12:12.326 13:50:10 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:12.326 13:50:10 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:12.326 13:50:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:12.326 13:50:10 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:12:12.326 13:50:10 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:12:12.326 13:50:10 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:12.326 13:50:10 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:12.326 13:50:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:12.326 13:50:10 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:12:12.326 13:50:10 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:12:12.326 13:50:10 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:12.326 13:50:10 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:12.326 13:50:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:12:12.326 13:50:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:12.326 13:50:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:12.326 13:50:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:12:12.326 13:50:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:12:12.326 
13:50:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:12.326 No valid GPT data, bailing 00:12:12.326 13:50:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:12.326 13:50:10 -- scripts/common.sh@391 -- # pt= 00:12:12.326 13:50:10 -- scripts/common.sh@392 -- # return 1 00:12:12.326 13:50:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:12:12.326 1+0 records in 00:12:12.326 1+0 records out 00:12:12.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582071 s, 180 MB/s 00:12:12.326 13:50:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:12.326 13:50:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:12.326 13:50:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:12:12.326 13:50:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:12:12.326 13:50:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:12:12.621 No valid GPT data, bailing 00:12:12.621 13:50:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:12.621 13:50:10 -- scripts/common.sh@391 -- # pt= 00:12:12.621 13:50:10 -- scripts/common.sh@392 -- # return 1 00:12:12.621 13:50:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:12:12.621 1+0 records in 00:12:12.621 1+0 records out 00:12:12.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655663 s, 160 MB/s 00:12:12.621 13:50:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:12.621 13:50:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:12.621 13:50:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:12:12.621 13:50:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:12:12.621 13:50:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:12:12.621 No valid GPT data, bailing 00:12:12.621 13:50:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:12:12.621 13:50:10 -- scripts/common.sh@391 -- # pt= 00:12:12.621 13:50:10 -- scripts/common.sh@392 -- # return 1 00:12:12.621 13:50:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:12:12.621 1+0 records in 00:12:12.621 1+0 records out 00:12:12.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00188489 s, 556 MB/s 00:12:12.621 13:50:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:12.621 13:50:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:12.621 13:50:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:12:12.621 13:50:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:12:12.621 13:50:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:12:12.621 No valid GPT data, bailing 00:12:12.621 13:50:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:12:12.621 13:50:11 -- scripts/common.sh@391 -- # pt= 00:12:12.621 13:50:11 -- scripts/common.sh@392 -- # return 1 00:12:12.621 13:50:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:12:12.621 1+0 records in 00:12:12.621 1+0 records out 00:12:12.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721234 s, 145 MB/s 00:12:12.621 13:50:11 -- spdk/autotest.sh@118 -- # sync 00:12:12.914 13:50:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:12:12.914 13:50:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:12:12.914 13:50:11 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:12:15.443 13:50:13 -- spdk/autotest.sh@124 -- # uname -s 00:12:15.443 13:50:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:12:15.443 13:50:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:15.443 13:50:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:15.443 13:50:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:15.443 13:50:13 -- common/autotest_common.sh@10 -- # set +x 00:12:15.443 ************************************ 00:12:15.443 START TEST setup.sh 00:12:15.443 ************************************ 00:12:15.443 13:50:13 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:15.702 * Looking for test storage... 00:12:15.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:15.702 13:50:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:12:15.702 13:50:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:12:15.702 13:50:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:15.702 13:50:14 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:15.702 13:50:14 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:15.702 13:50:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:15.702 ************************************ 00:12:15.702 START TEST acl 00:12:15.702 ************************************ 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:15.702 * Looking for test storage... 00:12:15.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:15.702 13:50:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:15.702 13:50:14 
setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:15.702 13:50:14 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:15.702 13:50:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:12:15.702 13:50:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:12:15.702 13:50:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:12:15.702 13:50:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:12:15.702 13:50:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:12:15.702 13:50:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:15.702 13:50:14 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:16.663 13:50:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:12:16.663 13:50:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:12:16.663 13:50:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:16.663 13:50:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:12:16.663 13:50:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:12:16.663 13:50:15 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:17.638 Hugepages 00:12:17.638 node hugesize free / total 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:17.638 00:12:17.638 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:17.638 13:50:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:17.638 13:50:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:12:17.638 13:50:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:12:17.638 13:50:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:17.638 13:50:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:17.638 13:50:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:12:17.639 13:50:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:17.639 13:50:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:17.639 13:50:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:17.639 13:50:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:17.639 13:50:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:17.913 13:50:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:12:17.913 13:50:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:17.913 13:50:16 setup.sh.acl 
-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:17.913 13:50:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:17.913 13:50:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:17.913 13:50:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:17.913 13:50:16 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:12:17.913 13:50:16 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:12:17.913 13:50:16 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:17.913 13:50:16 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:17.913 13:50:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:17.913 ************************************ 00:12:17.913 START TEST denied 00:12:17.913 ************************************ 00:12:17.913 13:50:16 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:12:17.913 13:50:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:12:17.913 13:50:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:12:17.913 13:50:16 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:12:17.913 13:50:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:12:17.913 13:50:16 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:18.850 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:18.850 13:50:17 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:19.786 00:12:19.786 real 0m1.877s 00:12:19.786 user 0m0.663s 00:12:19.786 sys 0m1.182s 00:12:19.786 13:50:18 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.786 13:50:18 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:12:19.786 ************************************ 00:12:19.786 END TEST denied 00:12:19.786 ************************************ 00:12:19.786 13:50:18 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:12:19.786 13:50:18 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:19.787 13:50:18 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.787 13:50:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:19.787 ************************************ 00:12:19.787 START TEST allowed 00:12:19.787 ************************************ 00:12:19.787 13:50:18 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:12:19.787 13:50:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:12:19.787 
13:50:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:12:19.787 13:50:18 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:12:19.787 13:50:18 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:12:19.787 13:50:18 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:20.740 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:20.740 13:50:19 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:21.676 ************************************ 00:12:21.676 END TEST allowed 00:12:21.676 ************************************ 00:12:21.676 00:12:21.676 real 0m1.880s 00:12:21.676 user 0m0.762s 00:12:21.676 sys 0m1.150s 00:12:21.676 13:50:20 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.676 13:50:20 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:12:21.676 00:12:21.676 real 0m6.075s 00:12:21.676 user 0m2.384s 00:12:21.676 sys 0m3.732s 00:12:21.676 13:50:20 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.676 13:50:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:21.676 ************************************ 00:12:21.676 END TEST acl 00:12:21.676 ************************************ 00:12:21.676 13:50:20 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:21.676 13:50:20 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:21.676 13:50:20 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.676 13:50:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:21.676 ************************************ 00:12:21.676 START TEST hugepages 00:12:21.676 ************************************ 00:12:21.676 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:21.936 * Looking for test storage... 
00:12:21.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5615804 kB' 'MemAvailable: 7410036 kB' 'Buffers: 2436 kB' 'Cached: 2006956 kB' 'SwapCached: 0 kB' 'Active: 840312 kB' 'Inactive: 1280896 kB' 'Active(anon): 122304 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 113824 kB' 'Mapped: 48552 kB' 'Shmem: 10488 kB' 'KReclaimable: 64536 kB' 'Slab: 140720 kB' 'SReclaimable: 64536 kB' 'SUnreclaim: 76184 kB' 'KernelStack: 6428 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 348388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.937 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
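The long run of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue" entries above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key. A minimal sketch of that loop, reconstructed from the xtrace (simplified: the real helper also buffers the file with mapfile and supports the per-node meminfo files under /sys/devices/system/node, as the later trace shows):

  # get_meminfo <key> -- scan /proc/meminfo and print the value of <key>
  # (sketch reconstructed from the trace; per-node handling and mapfile buffering omitted)
  get_meminfo() {
    local get=$1
    local var val
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # every other field is skipped, as logged above
      echo "$val"
      return 0
    done < /proc/meminfo
    return 1
  }

  get_meminfo Hugepagesize   # prints 2048 on this VM, matching the "echo 2048" just below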
00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:21.938 13:50:20 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:12:21.938 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:21.938 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.938 13:50:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:21.938 ************************************ 00:12:21.938 START TEST default_setup 00:12:21.938 ************************************ 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:21.938 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:12:21.939 13:50:20 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:22.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:22.874 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:22.874 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.154 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7701184 kB' 'MemAvailable: 9495304 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849640 kB' 'Inactive: 1280904 kB' 'Active(anon): 131632 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122752 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 64292 kB' 'Slab: 140568 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76276 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
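By this point default_setup has sized its allocation: get_test_nr_hugepages was called with 2097152 (kB) for node 0, and with the 2048 kB page size read earlier that works out to 1024 pages, all placed on the single NUMA node. The /proc/meminfo snapshot printed just above (HugePages_Total: 1024, HugePages_Free: 1024, Hugetlb: 2097152 kB) shows the kernel ended up with exactly that. A small sketch of the arithmetic, using the values from the trace (the division itself happens inside hugepages.sh; only its result, nr_hugepages=1024, is visible in the xtrace):

  size_kb=2097152                                  # get_test_nr_hugepages 2097152 0
  default_hugepages=2048                           # Hugepagesize from get_meminfo
  nr_hugepages=$(( size_kb / default_hugepages ))  # 1024 pages
  nodes_test[0]=$nr_hugepages                      # one node (no_nodes=1), so node0 takes all of them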
00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.154 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
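The scans that follow are verify_nr_hugepages re-reading the same counters through get_meminfo: AnonHugePages (anon, 0 in this run), then HugePages_Surp (surp) and HugePages_Rsvd (resv), before the per-node totals are checked against the expected allocation. A rough shape of that sequence, inferred from the hugepages.sh line numbers in the trace (the final comparison against nodes_test is assumed from context and not visible in this part of the log):

  anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97 -> 0
  surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99 -> 0
  resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100
  # expectation for this run: HugePages_Total=1024 and HugePages_Free=1024,
  # i.e. every page default_setup requested is present and still unused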
00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7701184 kB' 'MemAvailable: 9495304 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849276 kB' 'Inactive: 1280904 kB' 'Active(anon): 131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64292 kB' 'Slab: 140576 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76284 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 
'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:23.155 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.156 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 
13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.157 
13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7701184 kB' 'MemAvailable: 9495308 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849232 kB' 'Inactive: 1280908 kB' 'Active(anon): 131224 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122412 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64292 kB' 'Slab: 140572 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76280 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.157 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 
13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.158 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:12:23.159 nr_hugepages=1024 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:23.159 resv_hugepages=0 00:12:23.159 surplus_hugepages=0 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:23.159 anon_hugepages=0 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:23.159 13:50:21 
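[editorial sketch] The trace above is the per-key lookup loop from setup/common.sh's get_meminfo helper walking /proc/meminfo until it hits HugePages_Rsvd. The following is a minimal reconstruction of that logic from the trace alone, not the verbatim SPDK script; the function name get_meminfo_sketch and the sed-based "Node <n>" prefix strip are assumptions made for illustration.

    # Look up one key (e.g. HugePages_Rsvd) in /proc/meminfo, or in a
    # specific NUMA node's meminfo file when a node id is supplied.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _rest
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node <n> " prefix; drop it so the
        # key always lands in $var, as in the system-wide /proc/meminfo case.
        while IFS=': ' read -r var val _rest; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    # e.g.  get_meminfo_sketch HugePages_Rsvd      ->  0   (system-wide, as traced above)
    #       get_meminfo_sketch HugePages_Surp 0    ->  0   (NUMA node 0)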
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7701184 kB' 'MemAvailable: 9495308 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849164 kB' 'Inactive: 1280908 kB' 'Active(anon): 131156 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122288 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64292 kB' 'Slab: 140560 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76268 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.159 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.159 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.160 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.161 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7701184 kB' 'MemUsed: 4540796 kB' 'SwapCached: 0 kB' 'Active: 849172 kB' 'Inactive: 1280908 kB' 'Active(anon): 131164 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 2009384 kB' 'Mapped: 48556 kB' 'AnonPages: 122288 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64292 kB' 'Slab: 140552 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.161 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.162 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:12:23.422 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:23.423 node0=1024 expecting 1024 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:23.423 00:12:23.423 real 0m1.272s 00:12:23.423 user 0m0.549s 00:12:23.423 sys 0m0.688s 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.423 13:50:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:12:23.423 ************************************ 00:12:23.423 END TEST default_setup 00:12:23.423 ************************************ 00:12:23.423 13:50:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:12:23.423 13:50:21 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:23.423 13:50:21 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
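The scan that just finished above is the common.sh get_meminfo helper walking every /proc/meminfo field until it reaches the one requested (here HugePages_Surp, which comes back 0). A minimal sketch of that lookup, reconstructed from the setup/common.sh line numbers visible in the xtrace; the per-node fallback path and the exact argument handling are assumptions, not lifted verbatim from the script:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the "Node N " prefix strip below
    get_meminfo() {                       # get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local var val mem_f mem
        mem_f=/proc/meminfo
        # per-node stats live under sysfs; otherwise fall back to the global file
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                   # e.g. "0" for HugePages_Surp in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp it prints 0 here, which is what feeds the (( nodes_test[node] += 0 )) accounting right above before the node0=1024 verdict.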
00:12:23.423 13:50:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:23.423 ************************************ 00:12:23.423 START TEST per_node_1G_alloc 00:12:23.423 ************************************ 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:23.423 13:50:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:23.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:23.996 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:23.996 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8753784 kB' 'MemAvailable: 10547912 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849396 kB' 'Inactive: 1280912 kB' 'Active(anon): 131388 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122500 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 64292 kB' 'Slab: 140544 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76252 kB' 'KernelStack: 6372 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.996 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
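Stepping back from the field-by-field scan for a moment: the get_test_nr_hugepages 1048576 0 call traced at the top of this test reduces to straightforward arithmetic, 1048576 kB requested divided by the 2048 kB Hugepagesize reported in the meminfo dumps, which yields the NRHUGE=512 pages pinned to HUGENODE=0. A tiny sketch of that computation (variable names here are illustrative, not lifted from hugepages.sh):

    #!/usr/bin/env bash
    # Derive the per-node hugepage count behind the per_node_1G_alloc test.
    size_kb=1048576          # argument to get_test_nr_hugepages, i.e. 1 GiB
    hugepagesize_kb=2048     # "Hugepagesize: 2048 kB" from the meminfo dumps
    node=0                   # the single requested node id
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "NRHUGE=${nr_hugepages} HUGENODE=${node}"   # matches the hugepages.sh@146 exports above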
00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.997 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8753784 kB' 'MemAvailable: 10547912 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849004 kB' 'Inactive: 1280912 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122148 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64292 kB' 'Slab: 140560 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76268 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.998 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
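One more detail worth calling out from earlier in this test: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry at setup/hugepages.sh@96 is the transparent-hugepage gate, i.e. AnonHugePages is only read from meminfo when THP has not been forced to [never]. A hedged sketch of that branch; the sysfs path is the standard kernel location and is assumed rather than shown in the trace, and the helper relies on the get_meminfo sketch given after the default_setup test above:

    #!/usr/bin/env bash
    # Only count anonymous (transparent) huge pages when THP is not set to [never].
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier; 0 kB in this run
    fi

In this run the selected mode is [madvise], the pattern does not match [never], and the subsequent get_meminfo AnonHugePages returns 0, hence anon=0 at hugepages.sh@97.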
00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:23.999 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.000 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:24.001 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8753784 kB' 'MemAvailable: 10547912 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849264 kB' 'Inactive: 1280912 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64292 kB' 'Slab: 140560 kB' 'SReclaimable: 64292 kB' 'SUnreclaim: 76268 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 
13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.001 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 
13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.002 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.002 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 
13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:24.003 nr_hugepages=512 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:24.003 resv_hugepages=0 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:24.003 surplus_hugepages=0 00:12:24.003 anon_hugepages=0 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.003 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8753784 kB' 'MemAvailable: 10547896 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849072 kB' 'Inactive: 1280912 kB' 'Active(anon): 131064 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122168 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140524 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76260 kB' 'KernelStack: 6400 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
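The trace here is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time under IFS=': ', skipping every field until the requested key (HugePages_Total in this pass) matches, then echoing its value. A minimal standalone sketch of that lookup pattern, assuming a meminfo-style file; the name get_meminfo_field is illustrative, not the actual SPDK helper:

    #!/usr/bin/env bash
    # Look up one field from a meminfo-style file, mirroring the scan in the trace.
    # get_meminfo_field <key> [file] -- illustrative name, not the SPDK function.
    get_meminfo_field() {
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"
            return 0
        done < "$file"
        return 1
    }
    get_meminfo_field HugePages_Total   # prints 512 on the VM in this run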
00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.004 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.005 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8753784 kB' 'MemUsed: 3488196 kB' 'SwapCached: 0 kB' 'Active: 849200 kB' 'Inactive: 1280912 kB' 'Active(anon): 131192 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 2009384 kB' 'Mapped: 48816 kB' 'AnonPages: 122372 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64264 kB' 'Slab: 140524 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
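The checks at setup/hugepages.sh@107 and @110 earlier in this trace assert that the kernel's HugePages_Total equals the requested count plus surplus and reserved pages; the per-node pass then repeats the comparison for node0, which is what produces the "node0=512 expecting 512" line further down. A minimal sketch of that accounting check, using the values observed in this run:

    #!/usr/bin/env bash
    # Consistency check mirroring hugepages.sh@107/@110, with values from this run.
    nr_hugepages=512   # pages requested by the per_node_1G_alloc test
    resv=0             # HugePages_Rsvd reported by the kernel
    surp=0             # HugePages_Surp reported by the kernel
    total=512          # HugePages_Total reported by the kernel
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"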
00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.006 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.007 13:50:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:24.007 node0=512 expecting 512 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:24.007 00:12:24.007 real 0m0.707s 00:12:24.007 user 0m0.339s 00:12:24.007 sys 0m0.416s 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:24.007 13:50:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:24.007 ************************************ 00:12:24.007 END TEST per_node_1G_alloc 00:12:24.007 ************************************ 00:12:24.007 13:50:22 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:12:24.007 13:50:22 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:24.007 13:50:22 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.007 13:50:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:24.007 ************************************ 00:12:24.007 START TEST even_2G_alloc 00:12:24.007 ************************************ 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:24.007 13:50:22 
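[editor note] At this point the trace closes out per_node_1G_alloc (the 'node0=512 expecting 512' comparison passed and the test took roughly 0.7 s) and starts even_2G_alloc, where get_test_nr_hugepages is handed 2097152 (2 GiB expressed in kB) and turns it into nr_hugepages=1024, to be spread evenly across the available NUMA nodes; this VM exposes a single node, so the next lines put the whole count on node 0. The arithmetic below is illustrative only: the 2048 kB page size is taken from the Hugepagesize field in the meminfo snapshots further down, and none of the variable names are from hugepages.sh.

    # Illustrative sizing arithmetic for even_2G_alloc, using values from this log.
    size_kb=2097152                    # requested allocation: 2 GiB, in kB
    hugepagesize_kb=2048               # "Hugepagesize: 2048 kB" in the snapshots below
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "$nr_hugepages"               # -> 1024

    # With HUGE_EVEN_ALLOC=yes the total is split evenly across NUMA nodes;
    # a single node here means all 1024 pages land on node 0:
    no_nodes=1
    per_node=$(( nr_hugepages / no_nodes ))   # -> 1024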
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:24.007 13:50:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:24.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:24.576 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:24.576 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.576 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241980 kB' 'MemFree: 7709528 kB' 'MemAvailable: 9503644 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 849492 kB' 'Inactive: 1280916 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122616 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140740 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76476 kB' 'KernelStack: 6432 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 
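[editor note] The quoted block above is a full snapshot of /proc/meminfo that get_meminfo prints before scanning it for AnonHugePages. Its hugepage counters already reflect the allocation requested above: HugePages_Total and HugePages_Free are both 1024, Rsvd and Surp are 0, and Hugetlb equals the page count times the 2048 kB page size. For reading logs like this (outside the test itself), the relevant fields can be pulled and cross-checked with a one-off awk pass such as:

    # Print the hugepage counters and sanity-check that
    # Hugetlb == HugePages_Total * Hugepagesize (2097152 kB == 1024 * 2048 kB here).
    awk '/^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):/ {print}
         /^HugePages_Total:/ {total=$2}
         /^Hugepagesize:/    {psize=$2}
         /^Hugetlb:/         {hugetlb=$2}
         END {print "check:", (hugetlb == total * psize ? "OK" : "mismatch")}' /proc/meminfo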
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.577 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7709528 kB' 'MemAvailable: 9503644 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 849216 kB' 'Inactive: 1280916 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122344 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140720 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76456 kB' 'KernelStack: 6400 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.578 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.579 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.839 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 
13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:24.840 13:50:23 
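[editor note] By this point verify_nr_hugepages has collected anon=0 (no transparent hugepages inflating the numbers) and surp=0 (no surplus pages), and the scan that starts below fetches HugePages_Rsvd the same way. The snippet below is an assumed sketch of how those values could combine into the final check, based only on the numbers in this run; it is not copied from setup/hugepages.sh.

    # Assumed shape of the final verification; hypothetical, not the real script.
    expected=1024      # nr_hugepages requested by even_2G_alloc
    anon=0             # AnonHugePages (kB) -- THP should not be contributing
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    total=1024         # HugePages_Total from the snapshots
    (( anon == 0 )) || echo "warning: transparent hugepages in use"
    (( total - surp - resv == expected )) && echo "nr_hugepages OK: $expected"

With surp and resv both zero, every page counted in HugePages_Total is one this test allocated, which is the same kind of per-node assertion the 'node0=512 expecting 512' line above performed.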
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7710132 kB' 'MemAvailable: 9504248 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 849016 kB' 'Inactive: 1280916 kB' 'Active(anon): 131008 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122180 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140720 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76456 kB' 'KernelStack: 6416 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:24.840 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.841 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:24.842 nr_hugepages=1024 00:12:24.842 resv_hugepages=0 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:24.842 surplus_hugepages=0 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:24.842 anon_hugepages=0 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.842 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7710132 kB' 'MemAvailable: 9504248 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 849276 kB' 'Inactive: 1280916 kB' 'Active(anon): 131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122180 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140720 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76456 kB' 'KernelStack: 6416 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.842 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 
13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
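The long runs of "[[ <field> == <pattern> ]]" followed by "continue" that fill this stretch of the trace are get_meminfo() in setup/common.sh scanning the captured meminfo text one "field: value" pair at a time until it reaches the key it was asked for (HugePages_Rsvd above, HugePages_Total here), then echoing that value. A minimal sketch of that scan, reconstructed from the trace; the name get_meminfo_sketch and the direct read from /proc/meminfo are illustrative simplifications (the traced helper first buffers the file into an array and strips any "Node N" prefix), not the verbatim SPDK source:

    # Hedged reconstruction of the loop traced at setup/common.sh@31-33:
    # walk "field: value" pairs until the requested key is found, print its
    # value, and stop.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the requested key yet, keep scanning
            echo "$val"
            return 0
        done </proc/meminfo
    }

On this machine, get_meminfo_sketch HugePages_Rsvd would print 0, matching the "echo 0" / "return 0" pair that closes the HugePages_Rsvd scan above.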
00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.843 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 
13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7710132 kB' 'MemUsed: 4531848 kB' 'SwapCached: 0 kB' 'Active: 849216 kB' 'Inactive: 1280916 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 
0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 2009388 kB' 'Mapped: 48556 kB' 'AnonPages: 122380 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64264 kB' 'Slab: 140720 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.844 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
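The HugePages_Surp query running here was issued with node=0, so the trace shows setup/common.sh@22-24 switching mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo before repeating the same field-by-field scan against the node-local figures. (In the earlier system-wide calls node was left empty, the probe for /sys/devices/system/node/node/meminfo failed, and the helper stayed on /proc/meminfo.) A hedged sketch of that file selection; the structure is simplified from the trace rather than copied from the SPDK source:

    # Default to the system-wide meminfo, but prefer the per-node file when a
    # node number is supplied (node=0 in the call traced here).
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi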
00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.845 13:50:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.845 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:24.846 node0=1024 expecting 1024 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:24.846 00:12:24.846 real 0m0.660s 00:12:24.846 user 0m0.300s 00:12:24.846 sys 0m0.401s 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:24.846 13:50:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 ************************************ 00:12:24.846 END TEST even_2G_alloc 00:12:24.846 ************************************ 00:12:24.846 13:50:23 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:12:24.846 13:50:23 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:24.846 13:50:23 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.846 13:50:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 ************************************ 00:12:24.846 START TEST odd_alloc 00:12:24.846 ************************************ 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:12:24.846 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:24.846 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:25.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.417 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:25.417 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
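At this point the even_2G_alloc case has already closed out: the node check printed node0=1024 expecting 1024, the [[ 1024 == 1024 ]] comparison at hugepages.sh@130 held, and the test finished in roughly 0.66 s of wall time. The odd_alloc case that begins here deliberately asks for an odd page count: HUGEMEM=2049 (MB) becomes size=2098176 kB, which the script resolves to nr_hugepages=1025, and with 2048 kB pages that is 1025 x 2048 kB = 2099200 kB of hugetlb memory, the Hugetlb figure visible in the meminfo dump just below. The pass criterion it will re-apply is the same one traced for even_2G_alloc; a hedged sketch follows (the awk one-liner and the hp_total variable are illustrative, not the SPDK helper):

    # Hedged sketch of the check traced at setup/hugepages.sh@107-130: the
    # kernel-reported total must equal the requested pages plus any surplus
    # and reserved pages before the test is considered passed.
    nr_hugepages=1025; surp=0; resv=0
    hp_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( hp_total == nr_hugepages + surp + resv )) &&
        echo "HugePages_Total=$hp_total expecting $nr_hugepages"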
00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:25.417 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7710708 kB' 'MemAvailable: 9504824 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 849276 kB' 'Inactive: 1280916 kB' 'Active(anon): 131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140744 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76480 kB' 'KernelStack: 6432 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 
13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
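[editor's note] Every "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pair above, and the hundreds like it below, is the xtrace of get_meminfo walking the captured meminfo fields one at a time until it reaches the requested key. A sketch of that parsing pattern, assumed from the trace (the real setup/common.sh uses mapfile into a mem array rather than reading the file directly):

# Assumed shape of setup/common.sh's get_meminfo; names and details are
# illustrative, not the actual SPDK implementation.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue             # every miss shows up as a "continue" in the trace
        echo "$val"                                  # value in kB, or a bare count for HugePages_*
        return 0
    done < /proc/meminfo
    echo 0                                           # sketch fallback when the field is absent
}

get_meminfo AnonHugePages                            # prints 0 on this VM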
00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.418 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 
13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7711604 kB' 'MemAvailable: 9505720 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 849492 kB' 'Inactive: 1280916 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 122632 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140744 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76480 kB' 'KernelStack: 6416 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 365592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.419 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 
13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.420 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
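[editor's note] Because node= is empty in these calls, mem_f stays /proc/meminfo and the /sys/devices/system/node/node/meminfo existence check fails; the mem array is still run through the same "Node <N> " prefix strip (mem=("${mem[@]#Node +([0-9]) }")) so per-node meminfo files parse identically when a node is given. A small extglob illustration of that strip (the sample line is made up):

# Illustration only; shows the prefix removal seen in the trace.
shopt -s extglob
line='Node 0 HugePages_Rsvd:      0'
echo "${line#Node +([0-9]) }"                        # -> HugePages_Rsvd:      0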
00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7712104 kB' 'MemAvailable: 9506216 kB' 'Buffers: 2436 kB' 'Cached: 2006948 kB' 'SwapCached: 0 kB' 'Active: 849264 kB' 'Inactive: 1280912 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 122464 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140724 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76460 kB' 'KernelStack: 6384 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.421 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
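[editor's note] The odd rendering of each key as \H\u\g\e\P\a\g\e\s\_\R\s\v\d is an xtrace artifact: when the right-hand side of a [[ == ]] comparison is quoted, bash prints it character-escaped in the trace to show it is matched literally rather than as a glob. A tiny demo of the effect (illustrative, not from the test scripts):

set -x
get=HugePages_Rsvd
[[ HugePages_Free == "$get" ]]                       # traced as: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x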
00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.422 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 
13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:12:25.423 nr_hugepages=1025 00:12:25.423 resv_hugepages=0 00:12:25.423 surplus_hugepages=0 00:12:25.423 anon_hugepages=0 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7711348 kB' 'MemAvailable: 9505464 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 849216 kB' 'Inactive: 1280916 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 122348 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 64264 kB' 'Slab: 140724 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76460 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 364832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.423 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.424 
13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.424 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.424 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.424 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.424 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.424 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.424 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.684 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:25.685 13:50:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7711348 kB' 'MemUsed: 4530632 kB' 'SwapCached: 0 kB' 'Active: 849252 kB' 'Inactive: 1280916 kB' 'Active(anon): 131244 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'FilePages: 2009388 kB' 'Mapped: 48556 kB' 'AnonPages: 122384 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64264 kB' 'Slab: 140720 kB' 'SReclaimable: 64264 kB' 'SUnreclaim: 76456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.685 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 
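For the per-node lookup (get_meminfo HugePages_Surp 0) the same scan is pointed at the node's own meminfo file, whose lines carry a "Node 0 " prefix that is stripped before matching. Condensed from the common.sh commands visible in the trace, with node 0 hard-coded for illustration:

shopt -s extglob                    # the +([0-9]) pattern below needs extglob, as in setup/common.sh
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")    # drop the leading "Node 0 " so keys line up with /proc/meminfo
printf '%s\n' "${mem[@]}"           # the long quoted dumps in the trace come from a printf like this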
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:25.686 node0=1025 expecting 1025 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:12:25.686 00:12:25.686 real 0m0.770s 00:12:25.686 user 0m0.349s 00:12:25.686 sys 0m0.447s 00:12:25.686 ************************************ 00:12:25.686 END TEST odd_alloc 00:12:25.686 ************************************ 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:25.686 13:50:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:25.686 13:50:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:12:25.686 13:50:24 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:25.686 13:50:24 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:25.686 13:50:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:25.686 ************************************ 00:12:25.686 START TEST custom_alloc 00:12:25.686 ************************************ 00:12:25.686 13:50:24 
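END TEST odd_alloc closes the check that an odd page count is honoured exactly: HugePages_Total must equal the requested 1025 plus surplus and reserved pages (both 0 here), and the single NUMA node must report all 1025 pages, hence "node0=1025 expecting 1025". A rough stand-alone equivalent of that assertion, using awk in place of the script's own parser:

nr_hugepages=1025
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "odd allocation honoured: $total hugepages"
else
    echo "unexpected hugepage count: $total" >&2
fi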
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:25.686 
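custom_alloc begins by turning a target size into a page count: get_test_nr_hugepages 1048576 ends with nr_hugepages=512, which is consistent with dividing the requested kilobytes by the 2048 kB default hugepage size; the per-node helper then assigns all 512 pages to the only node present. The arithmetic, spelled out (the division is inferred from the numbers in the trace, not quoted from hugepages.sh):

size_kb=1048576                                                      # argument passed to get_test_nr_hugepages
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512
echo "nr_hugepages=$nr_hugepages"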
13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:25.686 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:26.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:26.280 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:26.280 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:26.280 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8778756 kB' 'MemAvailable: 10572872 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844160 kB' 'Inactive: 1280916 kB' 'Active(anon): 126152 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117544 kB' 'Mapped: 47816 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140336 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76076 kB' 'KernelStack: 6304 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.281 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.281 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:26.282 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8779160 kB' 'MemAvailable: 10573276 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844348 kB' 'Inactive: 1280916 kB' 'Active(anon): 126340 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 117512 kB' 'Mapped: 47816 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140332 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76072 kB' 'KernelStack: 6288 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.283 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.284 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8779160 kB' 'MemAvailable: 10573276 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844356 kB' 'Inactive: 1280916 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 117512 kB' 'Mapped: 47816 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140332 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76072 kB' 'KernelStack: 6288 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.285 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.286 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:26.287 nr_hugepages=512 00:12:26.287 resv_hugepages=0 00:12:26.287 surplus_hugepages=0 00:12:26.287 anon_hugepages=0 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:26.287 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:26.564 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:26.564 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:26.564 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8779680 kB' 'MemAvailable: 10573796 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844420 kB' 'Inactive: 1280916 kB' 'Active(anon): 126412 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 117576 kB' 'Mapped: 47816 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140332 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76072 kB' 'KernelStack: 6304 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
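The loop traced around this point is setup/common.sh's get_meminfo walking /proc/meminfo (or a node's sysfs meminfo) field by field with IFS=': ' until it reaches the requested key. A minimal standalone sketch of that lookup, under an assumed helper name (get_meminfo_value is illustrative, not the SPDK function itself):

#!/usr/bin/env bash
# Sketch only: mirrors the pattern visible in the trace (mapfile the file,
# strip any "Node <id> " prefix, then read key/value pairs until a match).
shopt -s extglob

get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem line var val _

        # Per-node statistics live under sysfs when a node id is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " on sysfs lines

        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue   # skip every field we are not after
                echo "${val:-0}"
                return 0
        done
        return 1
}

# On the run above this would print, e.g.:
#   get_meminfo_value HugePages_Total     -> 512
#   get_meminfo_value HugePages_Surp 0    -> 0 (node 0)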
00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.565 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.565 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8780272 kB' 'MemUsed: 3461708 kB' 'SwapCached: 0 kB' 'Active: 844356 kB' 'Inactive: 1280916 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'FilePages: 2009388 kB' 'Mapped: 47816 kB' 'AnonPages: 117512 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64260 kB' 'Slab: 140332 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.566 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:26.567 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:26.568 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:26.568 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:26.568 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:26.568 node0=512 expecting 512 00:12:26.568 ************************************ 00:12:26.568 END TEST custom_alloc 00:12:26.568 ************************************ 00:12:26.568 13:50:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:26.568 00:12:26.568 real 0m0.808s 00:12:26.568 user 0m0.368s 00:12:26.568 sys 0m0.456s 00:12:26.568 13:50:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.568 13:50:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:26.568 13:50:24 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:12:26.568 13:50:24 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:26.568 13:50:24 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.568 13:50:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:26.568 ************************************ 00:12:26.568 START TEST no_shrink_alloc 00:12:26.568 ************************************ 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:26.568 13:50:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:27.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:27.143 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:27.143 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.143 13:50:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.143 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7731520 kB' 'MemAvailable: 9525636 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844372 kB' 'Inactive: 1280916 kB' 'Active(anon): 126364 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117732 kB' 'Mapped: 47808 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140296 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76036 kB' 'KernelStack: 6260 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
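Here verify_nr_hugepages is re-reading meminfo for the no_shrink_alloc case, where get_test_nr_hugepages turned the requested 2097152 kB into 1024 pages of the 2048 kB default size. A condensed sketch of that bookkeeping, with illustrative names only (not the hugepages.sh code itself), matching the trace's "(( 512 == nr_hugepages + surp + resv ))"-style check:

#!/usr/bin/env bash
# Sketch only: derive the expected page count from the requested size, then
# confirm the kernel reports that many pages once surplus and reserved pages
# are accounted for, as the suite's verification step does.
verify_hugepages() {
        local requested_kb=$1                      # e.g. 2097152 for this test
        local page_kb expected total rsvd surp

        page_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)
        expected=$(( requested_kb / page_kb ))     # 2097152 / 2048 = 1024

        total=$(awk '/HugePages_Total:/ {print $2}' /proc/meminfo)
        rsvd=$(awk '/HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        surp=$(awk '/HugePages_Surp:/ {print $2}' /proc/meminfo)

        echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp"
        (( total == expected + surp + rsvd ))      # non-zero exit marks a failure
}

# verify_hugepages 2097152   # expects HugePages_Total == 1024 on this run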
00:12:27.144 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
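The trace above is the setup/common.sh get_meminfo helper resolving AnonHugePages to 0: it loads /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), strips any leading "Node N " prefix, then walks the "Key: value" pairs until the requested key matches and echoes its value. A minimal stand-alone sketch of that lookup, reconstructed from the trace rather than copied from the SPDK sources (function and variable names are illustrative):

#!/usr/bin/env bash
shopt -s extglob                      # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # With a NUMA node argument, read that node's counters from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

anon=$(get_meminfo AnonHugePages)     # 0 kB in the meminfo snapshots in this trace, so anon=0

The same walk is repeated below for HugePages_Surp and HugePages_Rsvd against the meminfo snapshot that the helper prints.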
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:12:27.145 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7732040 kB' 'MemAvailable: 9526156 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844464 kB' 'Inactive: 1280916 kB' 'Active(anon): 126456 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 117612 kB' 'Mapped: 47816 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140292 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76032 kB' 'KernelStack: 6288 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB'
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
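With anon and surp captured, the same lookup runs once more for HugePages_Rsvd, the script echoes its bookkeeping, and it then checks that the kernel still reports exactly the configured number of hugepages. A sketch of that accounting, assuming the get_meminfo helper sketched above; the variable names come from the echoed output, and obtaining the left-hand 1024 of the arithmetic tests via get_meminfo HugePages_Total is an assumption, not something the trace confirms:

nr_hugepages=1024                         # the allocation configured by this test
surp=$(get_meminfo HugePages_Surp)        # 0 in the snapshots above
resv=$(get_meminfo HugePages_Rsvd)        # 0
anon=$(get_meminfo AnonHugePages)         # 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The kernel's HugePages_Total must cover the requested pages plus any surplus
# and reserved pages; with the values above this is 1024 == 1024 + 0 + 0.
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2
(( total == nr_hugepages )) || echo "hugepage count changed" >&2

In the trace that follows, resv comes back 0, the four values are echoed, both arithmetic tests compare against 1024, and another get_meminfo HugePages_Total lookup walks the same snapshot.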
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:12:27.147 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7732040 kB' 'MemAvailable: 9526156 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844460 kB' 'Inactive: 1280916 kB' 'Active(anon): 126452 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 117636 kB' 'Mapped: 47820 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140292 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76032 kB' 'KernelStack: 6304 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB'
00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo
nr_hugepages=1024 00:12:27.149 nr_hugepages=1024 00:12:27.149 resv_hugepages=0 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:27.149 surplus_hugepages=0 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:27.149 anon_hugepages=0 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7732040 kB' 'MemAvailable: 9526156 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844388 kB' 'Inactive: 1280916 kB' 'Active(anon): 126380 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 117540 kB' 'Mapped: 47820 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140292 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76032 kB' 'KernelStack: 6288 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.149 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 
13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.150 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 
13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.410 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.411 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7732040 kB' 'MemUsed: 4509940 kB' 'SwapCached: 0 kB' 'Active: 844192 kB' 'Inactive: 1280916 kB' 'Active(anon): 126184 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 2009388 kB' 'Mapped: 47820 kB' 'AnonPages: 117376 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64260 kB' 'Slab: 140292 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
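The records around this point are setup/common.sh's get_meminfo walking the node-0 meminfo file key by key (MemTotal, MemFree, and so on) until it reaches the field it was asked for, here HugePages_Surp, and echoing that field's value. A minimal standalone sketch of that lookup pattern, reconstructed from the trace rather than copied from the repository (the helper name get_meminfo_field and its layout are illustrative, not the real function):

#!/usr/bin/env bash
# Sketch of the meminfo lookup visible in the trace (assumed layout, not verbatim
# setup/common.sh): read /proc/meminfo, or the per-node copy under /sys when a
# node id is given, strip the "Node <id> " prefix, split on ": ", print one field.
shopt -s extglob

get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node files prefix every line
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                    # e.g. 1024 for HugePages_Total
            return 0
        fi
    done <"$mem_f"
    return 1
}

# The same figures the harness extracts for node 0 before printing its verdict:
total=$(get_meminfo_field HugePages_Total 0)
surp=$(get_meminfo_field HugePages_Surp 0)
echo "node0: HugePages_Total=$total HugePages_Surp=$surp"

With 1024 total pages and 0 surplus pages returned by the scan, the earlier check (( 1024 == nr_hugepages + surp + resv )) passes and the per-node accounting below records 1024 pages for node 0.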
00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 
13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.412 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:27.413 
13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:27.413 node0=1024 expecting 1024 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:27.413 13:50:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:27.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:27.936 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:27.936 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:27.936 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.936 13:50:26 
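With the first verification passed ("node0=1024 expecting 1024"), the test sets CLEAR_HUGE=no and NRHUGE=512 and re-runs scripts/setup.sh. Because 1024 pages are already allocated on node0, setup.sh keeps the existing pool rather than shrinking it, which is what the "INFO: Requested 512 hugepages but 1024 already allocated on node0" line records. A rough sketch of that no-shrink decision, assuming 2 MiB pages on node 0 and the standard per-node sysfs knob (illustrative only, not the real scripts/setup.sh logic):

#!/usr/bin/env bash
# Sketch of the no-shrink behaviour exercised by this test case (assumption:
# node 0, 2 MiB hugepages; writing the sysfs file requires root).
requested=${NRHUGE:-512}
node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
allocated=$(cat "$node_sysfs")

if (( allocated >= requested )); then
    # Never shrink an existing allocation - just report it, as in the log above.
    echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
else
    echo "$requested" > "$node_sysfs"
fi

The verify_nr_hugepages pass that begins at the end of this record then re-reads AnonHugePages and the HugePages counters exactly as before, which is the point of the no_shrink_alloc case: the pool is expected to still hold the original 1024 pages.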
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7729436 kB' 'MemAvailable: 9523552 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844828 kB' 'Inactive: 1280916 kB' 'Active(anon): 126820 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 117916 kB' 'Mapped: 47956 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140304 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6272 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.936 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.937 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7729184 kB' 'MemAvailable: 9523300 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844676 kB' 'Inactive: 1280916 kB' 'Active(anon): 126668 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117788 kB' 'Mapped: 47852 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140304 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6284 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.938 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 
13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.939 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7729184 kB' 'MemAvailable: 9523300 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844496 kB' 'Inactive: 1280916 kB' 'Active(anon): 126488 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117632 kB' 'Mapped: 47852 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140304 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6300 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.940 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:27.941 nr_hugepages=1024 00:12:27.941 resv_hugepages=0 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:27.941 surplus_hugepages=0 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:27.941 anon_hugepages=0 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:27.941 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7729184 kB' 'MemAvailable: 9523300 kB' 'Buffers: 2436 kB' 'Cached: 2006952 kB' 'SwapCached: 0 kB' 'Active: 844560 kB' 'Inactive: 1280916 kB' 'Active(anon): 126552 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117612 kB' 'Mapped: 47852 kB' 'Shmem: 10464 kB' 'KReclaimable: 64260 kB' 'Slab: 140304 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6284 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.942 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:27.943 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:27.943 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7729184 kB' 'MemUsed: 4512796 kB' 'SwapCached: 0 kB' 'Active: 844300 kB' 'Inactive: 1280916 kB' 'Active(anon): 126292 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1280916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2009388 kB' 'Mapped: 47852 kB' 'AnonPages: 117612 kB' 'Shmem: 10464 kB' 'KernelStack: 6284 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64260 kB' 'Slab: 140304 kB' 'SReclaimable: 64260 kB' 'SUnreclaim: 76044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 
13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.944 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:27.945 13:50:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:27.945 node0=1024 expecting 1024 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:27.945 00:12:27.945 real 0m1.501s 00:12:27.945 user 0m0.684s 00:12:27.945 sys 0m0.889s 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.945 13:50:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:27.945 ************************************ 00:12:27.945 END TEST no_shrink_alloc 00:12:27.945 ************************************ 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:28.204 13:50:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:28.204 00:12:28.204 real 0m6.347s 00:12:28.204 user 0m2.804s 00:12:28.204 sys 0m3.696s 00:12:28.204 13:50:26 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:28.204 13:50:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:28.204 ************************************ 00:12:28.204 END TEST hugepages 00:12:28.204 ************************************ 00:12:28.204 13:50:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:28.204 13:50:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:28.204 13:50:26 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.204 13:50:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:28.204 ************************************ 00:12:28.204 START TEST driver 00:12:28.204 ************************************ 00:12:28.204 13:50:26 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:28.204 * Looking for test storage... 
00:12:28.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:28.204 13:50:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:12:28.204 13:50:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:28.204 13:50:26 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:29.143 13:50:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:12:29.143 13:50:27 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:29.143 13:50:27 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:29.143 13:50:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:29.143 ************************************ 00:12:29.143 START TEST guess_driver 00:12:29.143 ************************************ 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:12:29.143 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:12:29.143 Looking for driver=uio_pci_generic 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
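[editor's note] The guess_driver trace above shows the selection logic: vfio is chosen only when the host exposes IOMMU groups (or unsafe no-IOMMU mode is forced), and otherwise the test falls back to uio_pci_generic after confirming the module resolves via modprobe --show-depends. Below is a rough bash sketch of that decision, reconstructed from the trace rather than copied from driver.sh; the vfio-pci name and the nullglob handling are assumptions.

    # Sketch: driver selection as traced above; on this VM there are no IOMMU groups,
    # so the uio_pci_generic branch wins.
    pick_driver() {
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

        shopt -s nullglob                       # so an empty glob really counts as zero groups
        local iommu_groups=(/sys/kernel/iommu_groups/*)

        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci                       # assumed name for the vfio path
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver)
    echo "Looking for driver=$driver"           # matches the 'Looking for driver=uio_pci_generic' record above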
00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:12:29.143 13:50:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:30.096 13:50:28 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:31.032 00:12:31.032 real 0m1.906s 00:12:31.032 user 0m0.653s 00:12:31.032 sys 0m1.305s 00:12:31.032 13:50:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.032 13:50:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:12:31.032 ************************************ 00:12:31.032 END TEST guess_driver 00:12:31.032 ************************************ 00:12:31.032 ************************************ 00:12:31.032 END TEST driver 00:12:31.032 ************************************ 00:12:31.032 00:12:31.032 real 0m2.870s 00:12:31.032 user 0m1.001s 00:12:31.032 sys 0m2.007s 00:12:31.032 13:50:29 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.032 13:50:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:31.032 13:50:29 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:31.032 13:50:29 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:31.032 13:50:29 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.032 13:50:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:31.032 ************************************ 00:12:31.032 START TEST devices 00:12:31.032 ************************************ 00:12:31.032 13:50:29 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:31.290 * Looking for test storage... 
00:12:31.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:31.290 13:50:29 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:12:31.290 13:50:29 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:12:31.290 13:50:29 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:31.290 13:50:29 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:32.228 13:50:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:12:32.228 13:50:30 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:12:32.228 No valid GPT data, bailing 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:12:32.228 13:50:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:32.228 13:50:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:32.228 13:50:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:12:32.228 No valid GPT data, bailing 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:12:32.228 13:50:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:12:32.228 13:50:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:12:32.228 13:50:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:32.228 13:50:30 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:32.228 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:12:32.228 13:50:30 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:12:32.488 No valid GPT data, bailing 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:12:32.488 13:50:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:12:32.488 13:50:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:12:32.488 13:50:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:12:32.488 No valid GPT data, bailing 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:32.488 13:50:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:12:32.488 13:50:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:12:32.488 13:50:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:12:32.488 13:50:30 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:12:32.488 13:50:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:12:32.488 13:50:30 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:32.488 13:50:30 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:12:32.488 13:50:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:32.488 ************************************ 00:12:32.488 START TEST nvme_mount 00:12:32.488 ************************************ 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:32.488 13:50:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:12:33.485 Creating new GPT entries in memory. 00:12:33.485 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:33.485 other utilities. 00:12:33.485 13:50:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:33.485 13:50:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:33.485 13:50:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:33.485 13:50:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:33.485 13:50:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:34.866 Creating new GPT entries in memory. 00:12:34.866 The operation has completed successfully. 
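[editor's note] At this point the nvme_mount helper has wiped /dev/nvme0n1 and created the single test partition (the two sgdisk calls and the 'Creating new GPT entries' messages above); mkfs and mount of nvme0n1p1 follow in the trace. A minimal sketch of that step, with the device, sector range, and mount point taken from the trace; udevadm settle is only a stand-in for SPDK's scripts/sync_dev_uevents.sh wrapper.

    # Sketch only: the partition-and-mount step of the nvme_mount test, as seen above.
    disk=/dev/nvme0n1
    part=${disk}p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                            # drop any existing GPT/MBR metadata
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # partition 1, sector range from the trace
    udevadm settle                                      # stand-in for sync_dev_uevents.sh block/partition nvme0n1p1
    [[ -b $part ]] || exit 1

    mkdir -p "$mnt"
    mkfs.ext4 -qF "$part"
    mount "$part" "$mnt"
    touch "$mnt/test_nvme"                              # the test_nvme marker file the verify step checks later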
00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56351 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.866 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:35.125 13:50:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:35.125 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:35.125 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:35.125 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:35.383 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:35.383 13:50:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:35.642 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:35.642 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:35.642 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:35.642 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:35.642 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:35.908 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:35.908 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:12:35.908 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:35.908 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:35.908 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:35.908 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.176 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.176 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.176 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.176 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.176 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:36.177 13:50:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:36.435 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.435 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:12:36.435 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:36.435 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.435 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.435 13:50:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.695 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.695 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.695 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.695 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:36.955 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:36.955 00:12:36.955 real 0m4.392s 00:12:36.955 user 0m0.787s 00:12:36.955 sys 0m1.356s 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:36.955 13:50:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:12:36.955 ************************************ 00:12:36.955 END TEST nvme_mount 00:12:36.955 
************************************ 00:12:36.955 13:50:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:12:36.955 13:50:35 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:36.955 13:50:35 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:36.955 13:50:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:36.955 ************************************ 00:12:36.955 START TEST dm_mount 00:12:36.955 ************************************ 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:36.955 13:50:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:12:38.335 Creating new GPT entries in memory. 00:12:38.335 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:38.335 other utilities. 00:12:38.335 13:50:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:38.335 13:50:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:38.335 13:50:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:38.335 13:50:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:38.335 13:50:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:39.315 Creating new GPT entries in memory. 00:12:39.315 The operation has completed successfully. 
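The dm_mount preparation traced above carves the target disk into fixed-size GPT partitions with sgdisk under an flock. A minimal bash sketch of that loop, reconstructed from the traced commands (the function name and argument handling are assumptions; the sgdisk arithmetic reproduces the values logged in this run):

partition_drive() {
    local disk=$1                  # e.g. nvme0n1
    local part_no=${2:-2}          # two partitions in this run
    local size=${3:-1073741824}    # per-partition size before scaling

    local part part_start=0 part_end=0
    (( size /= 4096 ))             # scale down to the sector count used below, as in the trace

    sgdisk "/dev/$disk" --zap-all  # wipe any existing GPT/MBR structures first

    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serializes sgdisk invocations against the same disk
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
}

partition_drive nvme0n1 2 1073741824   # reproduces the two calls traced here: --new=1:2048:264191 and --new=2:264192:526335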
00:12:39.315 13:50:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:39.315 13:50:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:39.315 13:50:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:39.315 13:50:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:39.315 13:50:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:12:40.250 The operation has completed successfully. 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 56791 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:40.250 13:50:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:40.508 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.508 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:40.508 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:40.508 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.508 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.508 13:50:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.766 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.766 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:40.766 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.766 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:41.025 13:50:39 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:41.025 13:50:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.284 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:41.543 13:50:39 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:41.543 13:50:40 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:41.543 13:50:40 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:41.543 13:50:40 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:12:41.543 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:41.543 13:50:40 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:41.543 13:50:40 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:12:41.543 00:12:41.543 real 0m4.666s 00:12:41.543 user 0m0.618s 00:12:41.543 sys 0m0.967s 00:12:41.543 13:50:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:41.543 13:50:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:12:41.543 ************************************ 00:12:41.543 END TEST dm_mount 00:12:41.543 ************************************ 00:12:41.801 13:50:40 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:12:41.801 13:50:40 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:12:41.801 13:50:40 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:41.801 13:50:40 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:41.801 13:50:40 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:41.801 13:50:40 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:41.801 13:50:40 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:42.060 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:42.060 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:42.060 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:42.060 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:42.060 13:50:40 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:12:42.060 13:50:40 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:42.060 13:50:40 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:42.060 13:50:40 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:42.060 13:50:40 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:42.060 13:50:40 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:42.060 13:50:40 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:42.060 00:12:42.060 real 0m10.863s 00:12:42.060 user 0m2.100s 00:12:42.060 sys 0m3.162s 00:12:42.060 13:50:40 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.060 13:50:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:42.060 ************************************ 00:12:42.060 END TEST devices 00:12:42.060 ************************************ 00:12:42.060 00:12:42.060 real 0m26.566s 00:12:42.060 user 0m8.424s 00:12:42.060 sys 0m12.879s 00:12:42.060 13:50:40 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.060 13:50:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:42.060 ************************************ 00:12:42.060 END TEST setup.sh 00:12:42.060 ************************************ 00:12:42.060 13:50:40 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:42.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:42.995 Hugepages 00:12:42.995 node hugesize free / total 00:12:42.995 node0 1048576kB 0 / 0 00:12:42.995 node0 2048kB 2048 / 2048 00:12:42.995 00:12:42.995 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:42.995 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:42.995 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:12:43.254 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:12:43.254 13:50:41 -- spdk/autotest.sh@130 -- # uname -s 00:12:43.254 13:50:41 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:12:43.254 13:50:41 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:12:43.254 13:50:41 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:43.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:44.078 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.078 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.078 13:50:42 -- common/autotest_common.sh@1528 -- # sleep 1 00:12:45.455 13:50:43 -- common/autotest_common.sh@1529 -- # bdfs=() 00:12:45.455 13:50:43 -- common/autotest_common.sh@1529 -- # local bdfs 00:12:45.455 13:50:43 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:12:45.455 13:50:43 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:12:45.455 13:50:43 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:45.455 13:50:43 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:45.455 13:50:43 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:45.455 13:50:43 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:45.455 13:50:43 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:12:45.455 13:50:43 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:12:45.455 13:50:43 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:45.455 13:50:43 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:45.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:45.713 Waiting for block devices as requested 00:12:45.713 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.972 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.972 13:50:44 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:12:45.972 13:50:44 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:12:45.972 13:50:44 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:45.972 13:50:44 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:12:45.972 13:50:44 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:45.972 13:50:44 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:12:45.972 13:50:44 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:45.972 13:50:44 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:12:45.972 13:50:44 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:12:45.972 13:50:44 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:12:45.972 13:50:44 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:12:45.972 13:50:44 -- common/autotest_common.sh@1541 -- # grep oacs 00:12:45.972 13:50:44 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:12:45.972 13:50:44 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:12:45.972 13:50:44 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:12:45.972 13:50:44 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:12:45.972 13:50:44 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
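The nvme_namespace_revert block traced above maps each PCI address back to its kernel NVMe controller node via sysfs and then queries it with nvme-cli before deciding whether a revert is needed. A hedged condensation in bash (the helper name and the explicit bit-mask are assumptions; the trace only shows the individual readlink/grep/cut commands and the resulting values):

check_ctrlr_caps() {
    local bdf=$1                        # e.g. 0000:00:10.0
    local ctrlr
    # map the BDF to its /dev/nvmeX node, as the trace does with readlink + grep + basename
    ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")

    # OACS word from Identify Controller; ' 0x12a' for both controllers in this run
    local oacs
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    # bit 3 (0x8) advertises namespace management; the trace only records the result, 8
    local oacs_ns_manage=$(( oacs & 0x8 ))

    # unvmcap == 0 means no unallocated NVM capacity, so the revert is skipped ('continue' above)
    local unvmcap
    unvmcap=$(nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2)

    echo "/dev/$ctrlr oacs=$oacs ns_manage=$oacs_ns_manage unvmcap=$unvmcap"
}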
00:12:45.972 13:50:44 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:12:45.972 13:50:44 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:12:45.972 13:50:44 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:12:45.972 13:50:44 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:12:45.972 13:50:44 -- common/autotest_common.sh@1553 -- # continue 00:12:45.972 13:50:44 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:12:45.972 13:50:44 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:12:45.972 13:50:44 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:45.972 13:50:44 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:12:45.972 13:50:44 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:45.972 13:50:44 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:12:45.972 13:50:44 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:46.232 13:50:44 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:12:46.232 13:50:44 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:12:46.232 13:50:44 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:12:46.232 13:50:44 -- common/autotest_common.sh@1541 -- # grep oacs 00:12:46.232 13:50:44 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:12:46.232 13:50:44 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:12:46.232 13:50:44 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:12:46.232 13:50:44 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:12:46.232 13:50:44 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:12:46.232 13:50:44 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:12:46.232 13:50:44 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:12:46.232 13:50:44 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:12:46.232 13:50:44 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:12:46.232 13:50:44 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:12:46.232 13:50:44 -- common/autotest_common.sh@1553 -- # continue 00:12:46.232 13:50:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:12:46.232 13:50:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.232 13:50:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.232 13:50:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:12:46.232 13:50:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:46.232 13:50:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.232 13:50:44 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:46.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:47.058 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:47.058 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:47.058 13:50:45 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:12:47.058 13:50:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.058 13:50:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.317 13:50:45 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:12:47.317 13:50:45 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:12:47.317 13:50:45 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:12:47.317 13:50:45 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:12:47.317 13:50:45 -- common/autotest_common.sh@1573 -- # local bdfs 00:12:47.317 13:50:45 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:12:47.317 13:50:45 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:47.317 13:50:45 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:47.317 13:50:45 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:47.317 13:50:45 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:47.317 13:50:45 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:12:47.317 13:50:45 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:12:47.317 13:50:45 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:47.317 13:50:45 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:12:47.317 13:50:45 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:12:47.317 13:50:45 -- common/autotest_common.sh@1576 -- # device=0x0010 00:12:47.317 13:50:45 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:47.317 13:50:45 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:12:47.317 13:50:45 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:12:47.317 13:50:45 -- common/autotest_common.sh@1576 -- # device=0x0010 00:12:47.317 13:50:45 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:47.317 13:50:45 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:12:47.317 13:50:45 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:12:47.317 13:50:45 -- common/autotest_common.sh@1589 -- # return 0 00:12:47.317 13:50:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:12:47.317 13:50:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:12:47.317 13:50:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:47.317 13:50:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:47.317 13:50:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:12:47.317 13:50:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:47.317 13:50:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.317 13:50:45 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:47.317 13:50:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:47.317 13:50:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.317 13:50:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.317 ************************************ 00:12:47.317 START TEST env 00:12:47.317 ************************************ 00:12:47.317 13:50:45 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:47.317 * Looking for test storage... 
00:12:47.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:12:47.576 13:50:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:47.576 13:50:45 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:47.576 13:50:45 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.576 13:50:45 env -- common/autotest_common.sh@10 -- # set +x 00:12:47.576 ************************************ 00:12:47.576 START TEST env_memory 00:12:47.576 ************************************ 00:12:47.576 13:50:45 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:47.576 00:12:47.576 00:12:47.576 CUnit - A unit testing framework for C - Version 2.1-3 00:12:47.576 http://cunit.sourceforge.net/ 00:12:47.576 00:12:47.576 00:12:47.576 Suite: memory 00:12:47.576 Test: alloc and free memory map ...[2024-05-15 13:50:45.938360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:12:47.576 passed 00:12:47.576 Test: mem map translation ...[2024-05-15 13:50:45.958578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:12:47.576 [2024-05-15 13:50:45.958619] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:12:47.576 [2024-05-15 13:50:45.958656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:12:47.576 [2024-05-15 13:50:45.958664] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:12:47.576 passed 00:12:47.576 Test: mem map registration ...[2024-05-15 13:50:45.996709] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:12:47.576 [2024-05-15 13:50:45.996772] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:12:47.576 passed 00:12:47.576 Test: mem map adjacent registrations ...passed 00:12:47.576 00:12:47.576 Run Summary: Type Total Ran Passed Failed Inactive 00:12:47.576 suites 1 1 n/a 0 0 00:12:47.576 tests 4 4 4 0 0 00:12:47.576 asserts 152 152 152 0 n/a 00:12:47.576 00:12:47.576 Elapsed time = 0.141 seconds 00:12:47.576 00:12:47.576 real 0m0.156s 00:12:47.576 user 0m0.139s 00:12:47.576 sys 0m0.017s 00:12:47.576 13:50:46 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:47.576 13:50:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:12:47.576 ************************************ 00:12:47.576 END TEST env_memory 00:12:47.576 ************************************ 00:12:47.576 13:50:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:47.576 13:50:46 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:47.576 13:50:46 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.576 13:50:46 env -- common/autotest_common.sh@10 -- # set +x 00:12:47.576 ************************************ 00:12:47.576 START TEST env_vtophys 00:12:47.576 ************************************ 00:12:47.576 13:50:46 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:47.576 EAL: lib.eal log level changed from notice to debug 00:12:47.576 EAL: Detected lcore 0 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 1 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 2 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 3 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 4 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 5 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 6 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 7 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 8 as core 0 on socket 0 00:12:47.576 EAL: Detected lcore 9 as core 0 on socket 0 00:12:47.835 EAL: Maximum logical cores by configuration: 128 00:12:47.835 EAL: Detected CPU lcores: 10 00:12:47.835 EAL: Detected NUMA nodes: 1 00:12:47.835 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:12:47.835 EAL: Detected shared linkage of DPDK 00:12:47.835 EAL: No shared files mode enabled, IPC will be disabled 00:12:47.835 EAL: Selected IOVA mode 'PA' 00:12:47.835 EAL: Probing VFIO support... 00:12:47.835 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:47.835 EAL: VFIO modules not loaded, skipping VFIO support... 00:12:47.835 EAL: Ask a virtual area of 0x2e000 bytes 00:12:47.835 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:12:47.835 EAL: Setting up physically contiguous memory... 00:12:47.835 EAL: Setting maximum number of open files to 524288 00:12:47.835 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:12:47.835 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:12:47.835 EAL: Ask a virtual area of 0x61000 bytes 00:12:47.835 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:12:47.835 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:47.835 EAL: Ask a virtual area of 0x400000000 bytes 00:12:47.835 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:12:47.835 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:12:47.835 EAL: Ask a virtual area of 0x61000 bytes 00:12:47.835 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:12:47.835 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:47.835 EAL: Ask a virtual area of 0x400000000 bytes 00:12:47.835 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:12:47.835 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:12:47.835 EAL: Ask a virtual area of 0x61000 bytes 00:12:47.835 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:12:47.835 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:47.835 EAL: Ask a virtual area of 0x400000000 bytes 00:12:47.835 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:12:47.835 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:12:47.835 EAL: Ask a virtual area of 0x61000 bytes 00:12:47.835 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:12:47.835 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:47.835 EAL: Ask a virtual area of 0x400000000 bytes 00:12:47.835 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:12:47.835 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:12:47.835 EAL: Hugepages will be freed exactly as allocated. 
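The memseg reservations above are backed by the 2 MiB hugepages that the setup.sh status output earlier in this log reported as "node0 2048kB 2048 / 2048". A quick way to inspect the same state by hand, using standard procfs/sysfs paths rather than anything taken from this trace:

grep Huge /proc/meminfo                                                     # totals and free counts per page size
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages  # per-node reservation EAL draws from
# scripts/setup.sh normally reserves these pages; doing it manually would look like:
# echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages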
00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: TSC frequency is ~2490000 KHz 00:12:47.835 EAL: Main lcore 0 is ready (tid=7fabf5cc3a00;cpuset=[0]) 00:12:47.835 EAL: Trying to obtain current memory policy. 00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 0 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 2MB 00:12:47.835 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:47.835 EAL: No PCI address specified using 'addr=' in: bus=pci 00:12:47.835 EAL: Mem event callback 'spdk:(nil)' registered 00:12:47.835 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:12:47.835 00:12:47.835 00:12:47.835 CUnit - A unit testing framework for C - Version 2.1-3 00:12:47.835 http://cunit.sourceforge.net/ 00:12:47.835 00:12:47.835 00:12:47.835 Suite: components_suite 00:12:47.835 Test: vtophys_malloc_test ...passed 00:12:47.835 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 4 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 4MB 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was shrunk by 4MB 00:12:47.835 EAL: Trying to obtain current memory policy. 00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 4 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 6MB 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was shrunk by 6MB 00:12:47.835 EAL: Trying to obtain current memory policy. 00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 4 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 10MB 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was shrunk by 10MB 00:12:47.835 EAL: Trying to obtain current memory policy. 
00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 4 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 18MB 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was shrunk by 18MB 00:12:47.835 EAL: Trying to obtain current memory policy. 00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 4 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 34MB 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was shrunk by 34MB 00:12:47.835 EAL: Trying to obtain current memory policy. 00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 4 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 66MB 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was shrunk by 66MB 00:12:47.835 EAL: Trying to obtain current memory policy. 00:12:47.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:47.835 EAL: Restoring previous memory policy: 4 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.835 EAL: request: mp_malloc_sync 00:12:47.835 EAL: No shared files mode enabled, IPC is disabled 00:12:47.835 EAL: Heap on socket 0 was expanded by 130MB 00:12:47.835 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.093 EAL: request: mp_malloc_sync 00:12:48.093 EAL: No shared files mode enabled, IPC is disabled 00:12:48.093 EAL: Heap on socket 0 was shrunk by 130MB 00:12:48.093 EAL: Trying to obtain current memory policy. 00:12:48.093 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:48.093 EAL: Restoring previous memory policy: 4 00:12:48.093 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.093 EAL: request: mp_malloc_sync 00:12:48.093 EAL: No shared files mode enabled, IPC is disabled 00:12:48.093 EAL: Heap on socket 0 was expanded by 258MB 00:12:48.093 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.093 EAL: request: mp_malloc_sync 00:12:48.093 EAL: No shared files mode enabled, IPC is disabled 00:12:48.093 EAL: Heap on socket 0 was shrunk by 258MB 00:12:48.093 EAL: Trying to obtain current memory policy. 
00:12:48.093 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:48.093 EAL: Restoring previous memory policy: 4 00:12:48.093 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.093 EAL: request: mp_malloc_sync 00:12:48.093 EAL: No shared files mode enabled, IPC is disabled 00:12:48.093 EAL: Heap on socket 0 was expanded by 514MB 00:12:48.393 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.393 EAL: request: mp_malloc_sync 00:12:48.393 EAL: No shared files mode enabled, IPC is disabled 00:12:48.393 EAL: Heap on socket 0 was shrunk by 514MB 00:12:48.393 EAL: Trying to obtain current memory policy. 00:12:48.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:48.652 EAL: Restoring previous memory policy: 4 00:12:48.652 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.652 EAL: request: mp_malloc_sync 00:12:48.652 EAL: No shared files mode enabled, IPC is disabled 00:12:48.652 EAL: Heap on socket 0 was expanded by 1026MB 00:12:48.652 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.911 passed 00:12:48.911 00:12:48.911 Run Summary: Type Total Ran Passed Failed Inactive 00:12:48.911 suites 1 1 n/a 0 0 00:12:48.911 tests 2 2 2 0 0 00:12:48.911 asserts 5197 5197 5197 0 n/a 00:12:48.911 00:12:48.911 Elapsed time = 1.005 seconds 00:12:48.911 EAL: request: mp_malloc_sync 00:12:48.911 EAL: No shared files mode enabled, IPC is disabled 00:12:48.911 EAL: Heap on socket 0 was shrunk by 1026MB 00:12:48.911 EAL: Calling mem event callback 'spdk:(nil)' 00:12:48.911 EAL: request: mp_malloc_sync 00:12:48.911 EAL: No shared files mode enabled, IPC is disabled 00:12:48.911 EAL: Heap on socket 0 was shrunk by 2MB 00:12:48.911 EAL: No shared files mode enabled, IPC is disabled 00:12:48.911 EAL: No shared files mode enabled, IPC is disabled 00:12:48.911 EAL: No shared files mode enabled, IPC is disabled 00:12:48.911 00:12:48.911 real 0m1.207s 00:12:48.911 user 0m0.646s 00:12:48.911 sys 0m0.434s 00:12:48.911 13:50:47 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.911 13:50:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 ************************************ 00:12:48.911 END TEST env_vtophys 00:12:48.911 ************************************ 00:12:48.911 13:50:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:48.911 13:50:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:48.911 13:50:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.911 13:50:47 env -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 ************************************ 00:12:48.911 START TEST env_pci 00:12:48.911 ************************************ 00:12:48.911 13:50:47 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:48.911 00:12:48.911 00:12:48.911 CUnit - A unit testing framework for C - Version 2.1-3 00:12:48.911 http://cunit.sourceforge.net/ 00:12:48.911 00:12:48.911 00:12:48.911 Suite: pci 00:12:48.911 Test: pci_hook ...[2024-05-15 13:50:47.388559] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57984 has claimed it 00:12:48.911 passed 00:12:48.911 00:12:48.911 Run Summary: Type Total Ran Passed Failed Inactive 00:12:48.911 suites 1 1 n/a 0 0 00:12:48.911 tests 1 1 1 0 0 00:12:48.911 asserts 25 25 25 0 n/a 00:12:48.911 00:12:48.911 Elapsed time = 0.002 seconds 00:12:48.911 EAL: Cannot find 
device (10000:00:01.0) 00:12:48.911 EAL: Failed to attach device on primary process 00:12:48.911 00:12:48.911 real 0m0.019s 00:12:48.911 user 0m0.006s 00:12:48.911 sys 0m0.012s 00:12:48.911 13:50:47 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.911 13:50:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 ************************************ 00:12:48.911 END TEST env_pci 00:12:48.911 ************************************ 00:12:48.911 13:50:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:12:48.911 13:50:47 env -- env/env.sh@15 -- # uname 00:12:48.911 13:50:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:12:48.911 13:50:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:12:48.911 13:50:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:48.911 13:50:47 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:48.911 13:50:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.911 13:50:47 env -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 ************************************ 00:12:48.911 START TEST env_dpdk_post_init 00:12:48.911 ************************************ 00:12:48.911 13:50:47 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:49.170 EAL: Detected CPU lcores: 10 00:12:49.170 EAL: Detected NUMA nodes: 1 00:12:49.170 EAL: Detected shared linkage of DPDK 00:12:49.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:49.170 EAL: Selected IOVA mode 'PA' 00:12:49.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:49.170 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:12:49.170 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:12:49.170 Starting DPDK initialization... 00:12:49.170 Starting SPDK post initialization... 00:12:49.170 SPDK NVMe probe 00:12:49.170 Attaching to 0000:00:10.0 00:12:49.170 Attaching to 0000:00:11.0 00:12:49.170 Attached to 0000:00:10.0 00:12:49.170 Attached to 0000:00:11.0 00:12:49.170 Cleaning up... 
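The env.sh trace above assembles the DPDK arguments before launching env_dpdk_post_init. A short bash reconstruction of that assembly ($testdir is a placeholder for the test binary's directory; the flags and the Linux check mirror the traced lines):

argv='-c 0x1 '                              # run the unit test on a single core
if [ "$(uname)" = Linux ]; then
    # on Linux, pin the EAL mapping base; commonly done so multi-process runs
    # can map shared memory at matching virtual addresses
    argv+=--base-virtaddr=0x200000000000
fi
run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv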
00:12:49.170 00:12:49.170 real 0m0.181s 00:12:49.170 user 0m0.047s 00:12:49.170 sys 0m0.035s 00:12:49.170 13:50:47 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.170 13:50:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:12:49.170 ************************************ 00:12:49.170 END TEST env_dpdk_post_init 00:12:49.170 ************************************ 00:12:49.170 13:50:47 env -- env/env.sh@26 -- # uname 00:12:49.170 13:50:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:12:49.170 13:50:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:49.170 13:50:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:49.170 13:50:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.171 13:50:47 env -- common/autotest_common.sh@10 -- # set +x 00:12:49.171 ************************************ 00:12:49.171 START TEST env_mem_callbacks 00:12:49.171 ************************************ 00:12:49.171 13:50:47 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:49.171 EAL: Detected CPU lcores: 10 00:12:49.171 EAL: Detected NUMA nodes: 1 00:12:49.171 EAL: Detected shared linkage of DPDK 00:12:49.429 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:49.429 EAL: Selected IOVA mode 'PA' 00:12:49.429 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:49.429 00:12:49.429 00:12:49.429 CUnit - A unit testing framework for C - Version 2.1-3 00:12:49.429 http://cunit.sourceforge.net/ 00:12:49.429 00:12:49.429 00:12:49.429 Suite: memory 00:12:49.429 Test: test ... 00:12:49.429 register 0x200000200000 2097152 00:12:49.429 malloc 3145728 00:12:49.429 register 0x200000400000 4194304 00:12:49.429 buf 0x200000500000 len 3145728 PASSED 00:12:49.429 malloc 64 00:12:49.429 buf 0x2000004fff40 len 64 PASSED 00:12:49.429 malloc 4194304 00:12:49.429 register 0x200000800000 6291456 00:12:49.429 buf 0x200000a00000 len 4194304 PASSED 00:12:49.429 free 0x200000500000 3145728 00:12:49.429 free 0x2000004fff40 64 00:12:49.429 unregister 0x200000400000 4194304 PASSED 00:12:49.429 free 0x200000a00000 4194304 00:12:49.429 unregister 0x200000800000 6291456 PASSED 00:12:49.429 malloc 8388608 00:12:49.429 register 0x200000400000 10485760 00:12:49.429 buf 0x200000600000 len 8388608 PASSED 00:12:49.429 free 0x200000600000 8388608 00:12:49.429 unregister 0x200000400000 10485760 PASSED 00:12:49.429 passed 00:12:49.429 00:12:49.429 Run Summary: Type Total Ran Passed Failed Inactive 00:12:49.429 suites 1 1 n/a 0 0 00:12:49.429 tests 1 1 1 0 0 00:12:49.429 asserts 15 15 15 0 n/a 00:12:49.429 00:12:49.429 Elapsed time = 0.012 seconds 00:12:49.429 00:12:49.429 real 0m0.147s 00:12:49.429 user 0m0.015s 00:12:49.429 sys 0m0.030s 00:12:49.429 13:50:47 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.429 13:50:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:12:49.429 ************************************ 00:12:49.429 END TEST env_mem_callbacks 00:12:49.429 ************************************ 00:12:49.429 ************************************ 00:12:49.429 END TEST env 00:12:49.429 ************************************ 00:12:49.429 00:12:49.429 real 0m2.145s 00:12:49.429 user 0m1.012s 00:12:49.429 sys 0m0.802s 00:12:49.429 13:50:47 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.429 13:50:47 env -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.429 13:50:47 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:49.429 13:50:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:49.429 13:50:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.429 13:50:47 -- common/autotest_common.sh@10 -- # set +x 00:12:49.429 ************************************ 00:12:49.429 START TEST rpc 00:12:49.429 ************************************ 00:12:49.429 13:50:47 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:49.687 * Looking for test storage... 00:12:49.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:12:49.687 13:50:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58099 00:12:49.687 13:50:48 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:12:49.687 13:50:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:49.687 13:50:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58099 00:12:49.687 13:50:48 rpc -- common/autotest_common.sh@827 -- # '[' -z 58099 ']' 00:12:49.687 13:50:48 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.687 13:50:48 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:49.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.687 13:50:48 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.687 13:50:48 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:49.687 13:50:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.687 [2024-05-15 13:50:48.148868] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:12:49.687 [2024-05-15 13:50:48.148948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58099 ] 00:12:49.947 [2024-05-15 13:50:48.275496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.947 [2024-05-15 13:50:48.380134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:12:49.947 [2024-05-15 13:50:48.380197] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58099' to capture a snapshot of events at runtime. 00:12:49.947 [2024-05-15 13:50:48.380206] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.947 [2024-05-15 13:50:48.380231] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.947 [2024-05-15 13:50:48.380238] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58099 for offline analysis/debug. 
00:12:49.947 [2024-05-15 13:50:48.380273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.514 13:50:49 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:50.514 13:50:49 rpc -- common/autotest_common.sh@860 -- # return 0 00:12:50.514 13:50:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:50.514 13:50:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:50.514 13:50:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:12:50.514 13:50:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:12:50.514 13:50:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:50.514 13:50:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:50.514 13:50:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.514 ************************************ 00:12:50.514 START TEST rpc_integrity 00:12:50.514 ************************************ 00:12:50.514 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:12:50.514 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.514 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.514 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.514 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.514 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:50.514 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:50.773 { 00:12:50.773 "name": "Malloc0", 00:12:50.773 "aliases": [ 00:12:50.773 "efe3f126-cf03-43e5-a2d7-abb1de9c7fa8" 00:12:50.773 ], 00:12:50.773 "product_name": "Malloc disk", 00:12:50.773 "block_size": 512, 00:12:50.773 "num_blocks": 16384, 00:12:50.773 "uuid": "efe3f126-cf03-43e5-a2d7-abb1de9c7fa8", 00:12:50.773 "assigned_rate_limits": { 00:12:50.773 "rw_ios_per_sec": 0, 00:12:50.773 "rw_mbytes_per_sec": 0, 00:12:50.773 "r_mbytes_per_sec": 0, 00:12:50.773 "w_mbytes_per_sec": 0 00:12:50.773 }, 00:12:50.773 "claimed": false, 00:12:50.773 "zoned": false, 00:12:50.773 "supported_io_types": { 00:12:50.773 "read": true, 00:12:50.773 "write": true, 00:12:50.773 "unmap": true, 00:12:50.773 "write_zeroes": 
true, 00:12:50.773 "flush": true, 00:12:50.773 "reset": true, 00:12:50.773 "compare": false, 00:12:50.773 "compare_and_write": false, 00:12:50.773 "abort": true, 00:12:50.773 "nvme_admin": false, 00:12:50.773 "nvme_io": false 00:12:50.773 }, 00:12:50.773 "memory_domains": [ 00:12:50.773 { 00:12:50.773 "dma_device_id": "system", 00:12:50.773 "dma_device_type": 1 00:12:50.773 }, 00:12:50.773 { 00:12:50.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.773 "dma_device_type": 2 00:12:50.773 } 00:12:50.773 ], 00:12:50.773 "driver_specific": {} 00:12:50.773 } 00:12:50.773 ]' 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.773 [2024-05-15 13:50:49.179198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:12:50.773 [2024-05-15 13:50:49.179249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.773 [2024-05-15 13:50:49.179264] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcc24b0 00:12:50.773 [2024-05-15 13:50:49.179273] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.773 [2024-05-15 13:50:49.180643] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.773 [2024-05-15 13:50:49.180674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:50.773 Passthru0 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.773 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.773 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:50.773 { 00:12:50.773 "name": "Malloc0", 00:12:50.773 "aliases": [ 00:12:50.773 "efe3f126-cf03-43e5-a2d7-abb1de9c7fa8" 00:12:50.773 ], 00:12:50.773 "product_name": "Malloc disk", 00:12:50.773 "block_size": 512, 00:12:50.773 "num_blocks": 16384, 00:12:50.773 "uuid": "efe3f126-cf03-43e5-a2d7-abb1de9c7fa8", 00:12:50.773 "assigned_rate_limits": { 00:12:50.773 "rw_ios_per_sec": 0, 00:12:50.773 "rw_mbytes_per_sec": 0, 00:12:50.773 "r_mbytes_per_sec": 0, 00:12:50.773 "w_mbytes_per_sec": 0 00:12:50.773 }, 00:12:50.773 "claimed": true, 00:12:50.773 "claim_type": "exclusive_write", 00:12:50.773 "zoned": false, 00:12:50.773 "supported_io_types": { 00:12:50.773 "read": true, 00:12:50.773 "write": true, 00:12:50.773 "unmap": true, 00:12:50.773 "write_zeroes": true, 00:12:50.773 "flush": true, 00:12:50.773 "reset": true, 00:12:50.773 "compare": false, 00:12:50.773 "compare_and_write": false, 00:12:50.774 "abort": true, 00:12:50.774 "nvme_admin": false, 00:12:50.774 "nvme_io": false 00:12:50.774 }, 00:12:50.774 "memory_domains": [ 00:12:50.774 { 00:12:50.774 "dma_device_id": "system", 00:12:50.774 "dma_device_type": 1 00:12:50.774 }, 00:12:50.774 { 00:12:50.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.774 "dma_device_type": 2 00:12:50.774 } 
00:12:50.774 ], 00:12:50.774 "driver_specific": {} 00:12:50.774 }, 00:12:50.774 { 00:12:50.774 "name": "Passthru0", 00:12:50.774 "aliases": [ 00:12:50.774 "40727808-c1f3-5ac0-b970-ce1dc460ab92" 00:12:50.774 ], 00:12:50.774 "product_name": "passthru", 00:12:50.774 "block_size": 512, 00:12:50.774 "num_blocks": 16384, 00:12:50.774 "uuid": "40727808-c1f3-5ac0-b970-ce1dc460ab92", 00:12:50.774 "assigned_rate_limits": { 00:12:50.774 "rw_ios_per_sec": 0, 00:12:50.774 "rw_mbytes_per_sec": 0, 00:12:50.774 "r_mbytes_per_sec": 0, 00:12:50.774 "w_mbytes_per_sec": 0 00:12:50.774 }, 00:12:50.774 "claimed": false, 00:12:50.774 "zoned": false, 00:12:50.774 "supported_io_types": { 00:12:50.774 "read": true, 00:12:50.774 "write": true, 00:12:50.774 "unmap": true, 00:12:50.774 "write_zeroes": true, 00:12:50.774 "flush": true, 00:12:50.774 "reset": true, 00:12:50.774 "compare": false, 00:12:50.774 "compare_and_write": false, 00:12:50.774 "abort": true, 00:12:50.774 "nvme_admin": false, 00:12:50.774 "nvme_io": false 00:12:50.774 }, 00:12:50.774 "memory_domains": [ 00:12:50.774 { 00:12:50.774 "dma_device_id": "system", 00:12:50.774 "dma_device_type": 1 00:12:50.774 }, 00:12:50.774 { 00:12:50.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.774 "dma_device_type": 2 00:12:50.774 } 00:12:50.774 ], 00:12:50.774 "driver_specific": { 00:12:50.774 "passthru": { 00:12:50.774 "name": "Passthru0", 00:12:50.774 "base_bdev_name": "Malloc0" 00:12:50.774 } 00:12:50.774 } 00:12:50.774 } 00:12:50.774 ]' 00:12:50.774 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:12:50.774 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:50.774 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.774 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.774 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:50.774 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.774 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:50.774 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:12:51.033 ************************************ 00:12:51.033 END TEST rpc_integrity 00:12:51.033 ************************************ 00:12:51.033 13:50:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:51.033 00:12:51.033 real 0m0.312s 00:12:51.033 user 0m0.189s 00:12:51.033 sys 0m0.047s 00:12:51.033 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.033 13:50:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.033 13:50:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:12:51.033 13:50:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.033 13:50:49 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.033 13:50:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.033 ************************************ 00:12:51.033 START TEST rpc_plugins 00:12:51.033 ************************************ 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:12:51.033 { 00:12:51.033 "name": "Malloc1", 00:12:51.033 "aliases": [ 00:12:51.033 "ee85640e-7595-43f4-bc88-64b6b40a1147" 00:12:51.033 ], 00:12:51.033 "product_name": "Malloc disk", 00:12:51.033 "block_size": 4096, 00:12:51.033 "num_blocks": 256, 00:12:51.033 "uuid": "ee85640e-7595-43f4-bc88-64b6b40a1147", 00:12:51.033 "assigned_rate_limits": { 00:12:51.033 "rw_ios_per_sec": 0, 00:12:51.033 "rw_mbytes_per_sec": 0, 00:12:51.033 "r_mbytes_per_sec": 0, 00:12:51.033 "w_mbytes_per_sec": 0 00:12:51.033 }, 00:12:51.033 "claimed": false, 00:12:51.033 "zoned": false, 00:12:51.033 "supported_io_types": { 00:12:51.033 "read": true, 00:12:51.033 "write": true, 00:12:51.033 "unmap": true, 00:12:51.033 "write_zeroes": true, 00:12:51.033 "flush": true, 00:12:51.033 "reset": true, 00:12:51.033 "compare": false, 00:12:51.033 "compare_and_write": false, 00:12:51.033 "abort": true, 00:12:51.033 "nvme_admin": false, 00:12:51.033 "nvme_io": false 00:12:51.033 }, 00:12:51.033 "memory_domains": [ 00:12:51.033 { 00:12:51.033 "dma_device_id": "system", 00:12:51.033 "dma_device_type": 1 00:12:51.033 }, 00:12:51.033 { 00:12:51.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.033 "dma_device_type": 2 00:12:51.033 } 00:12:51.033 ], 00:12:51.033 "driver_specific": {} 00:12:51.033 } 00:12:51.033 ]' 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:12:51.033 13:50:49 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:12:51.033 00:12:51.033 real 0m0.153s 00:12:51.033 user 0m0.089s 00:12:51.033 sys 0m0.022s 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.033 13:50:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:51.033 ************************************ 00:12:51.033 END TEST rpc_plugins 00:12:51.033 ************************************ 00:12:51.333 13:50:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:12:51.333 13:50:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.333 13:50:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.333 13:50:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.333 ************************************ 00:12:51.333 START TEST rpc_trace_cmd_test 00:12:51.333 ************************************ 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:12:51.333 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58099", 00:12:51.333 "tpoint_group_mask": "0x8", 00:12:51.333 "iscsi_conn": { 00:12:51.333 "mask": "0x2", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "scsi": { 00:12:51.333 "mask": "0x4", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "bdev": { 00:12:51.333 "mask": "0x8", 00:12:51.333 "tpoint_mask": "0xffffffffffffffff" 00:12:51.333 }, 00:12:51.333 "nvmf_rdma": { 00:12:51.333 "mask": "0x10", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "nvmf_tcp": { 00:12:51.333 "mask": "0x20", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "ftl": { 00:12:51.333 "mask": "0x40", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "blobfs": { 00:12:51.333 "mask": "0x80", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "dsa": { 00:12:51.333 "mask": "0x200", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "thread": { 00:12:51.333 "mask": "0x400", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "nvme_pcie": { 00:12:51.333 "mask": "0x800", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "iaa": { 00:12:51.333 "mask": "0x1000", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "nvme_tcp": { 00:12:51.333 "mask": "0x2000", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "bdev_nvme": { 00:12:51.333 "mask": "0x4000", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 }, 00:12:51.333 "sock": { 00:12:51.333 "mask": "0x8000", 00:12:51.333 "tpoint_mask": "0x0" 00:12:51.333 } 00:12:51.333 }' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:12:51.333 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:12:51.598 13:50:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:12:51.598 00:12:51.598 real 0m0.271s 00:12:51.598 user 0m0.215s 00:12:51.598 sys 0m0.040s 00:12:51.598 13:50:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.598 ************************************ 00:12:51.598 END TEST rpc_trace_cmd_test 00:12:51.598 13:50:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.598 ************************************ 00:12:51.598 13:50:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:12:51.598 13:50:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:12:51.598 13:50:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:12:51.598 13:50:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.598 13:50:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.598 13:50:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.598 ************************************ 00:12:51.598 START TEST rpc_daemon_integrity 00:12:51.598 ************************************ 00:12:51.598 13:50:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:12:51.598 13:50:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.598 13:50:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.598 13:50:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.598 13:50:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.598 13:50:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:51.598 13:50:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:51.598 { 00:12:51.598 "name": "Malloc2", 00:12:51.598 "aliases": [ 00:12:51.598 "4dfbfb9e-2e9f-4c74-89a9-071d9036cf61" 00:12:51.598 ], 00:12:51.598 "product_name": "Malloc disk", 00:12:51.598 "block_size": 512, 00:12:51.598 "num_blocks": 16384, 00:12:51.598 "uuid": "4dfbfb9e-2e9f-4c74-89a9-071d9036cf61", 00:12:51.598 "assigned_rate_limits": { 00:12:51.598 "rw_ios_per_sec": 0, 00:12:51.598 
"rw_mbytes_per_sec": 0, 00:12:51.598 "r_mbytes_per_sec": 0, 00:12:51.598 "w_mbytes_per_sec": 0 00:12:51.598 }, 00:12:51.598 "claimed": false, 00:12:51.598 "zoned": false, 00:12:51.598 "supported_io_types": { 00:12:51.598 "read": true, 00:12:51.598 "write": true, 00:12:51.598 "unmap": true, 00:12:51.598 "write_zeroes": true, 00:12:51.598 "flush": true, 00:12:51.598 "reset": true, 00:12:51.598 "compare": false, 00:12:51.598 "compare_and_write": false, 00:12:51.598 "abort": true, 00:12:51.598 "nvme_admin": false, 00:12:51.598 "nvme_io": false 00:12:51.598 }, 00:12:51.598 "memory_domains": [ 00:12:51.598 { 00:12:51.598 "dma_device_id": "system", 00:12:51.598 "dma_device_type": 1 00:12:51.598 }, 00:12:51.598 { 00:12:51.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.598 "dma_device_type": 2 00:12:51.598 } 00:12:51.598 ], 00:12:51.598 "driver_specific": {} 00:12:51.598 } 00:12:51.598 ]' 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.598 [2024-05-15 13:50:50.109994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:12:51.598 [2024-05-15 13:50:50.110044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.598 [2024-05-15 13:50:50.110079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd22430 00:12:51.598 [2024-05-15 13:50:50.110088] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.598 [2024-05-15 13:50:50.111319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.598 [2024-05-15 13:50:50.111356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:51.598 Passthru0 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:51.598 { 00:12:51.598 "name": "Malloc2", 00:12:51.598 "aliases": [ 00:12:51.598 "4dfbfb9e-2e9f-4c74-89a9-071d9036cf61" 00:12:51.598 ], 00:12:51.598 "product_name": "Malloc disk", 00:12:51.598 "block_size": 512, 00:12:51.598 "num_blocks": 16384, 00:12:51.598 "uuid": "4dfbfb9e-2e9f-4c74-89a9-071d9036cf61", 00:12:51.598 "assigned_rate_limits": { 00:12:51.598 "rw_ios_per_sec": 0, 00:12:51.598 "rw_mbytes_per_sec": 0, 00:12:51.598 "r_mbytes_per_sec": 0, 00:12:51.598 "w_mbytes_per_sec": 0 00:12:51.598 }, 00:12:51.598 "claimed": true, 00:12:51.598 "claim_type": "exclusive_write", 00:12:51.598 "zoned": false, 00:12:51.598 "supported_io_types": { 00:12:51.598 "read": true, 00:12:51.598 "write": true, 00:12:51.598 "unmap": true, 00:12:51.598 "write_zeroes": true, 00:12:51.598 "flush": true, 00:12:51.598 "reset": true, 00:12:51.598 "compare": false, 00:12:51.598 
"compare_and_write": false, 00:12:51.598 "abort": true, 00:12:51.598 "nvme_admin": false, 00:12:51.598 "nvme_io": false 00:12:51.598 }, 00:12:51.598 "memory_domains": [ 00:12:51.598 { 00:12:51.598 "dma_device_id": "system", 00:12:51.598 "dma_device_type": 1 00:12:51.598 }, 00:12:51.598 { 00:12:51.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.598 "dma_device_type": 2 00:12:51.598 } 00:12:51.598 ], 00:12:51.598 "driver_specific": {} 00:12:51.598 }, 00:12:51.598 { 00:12:51.598 "name": "Passthru0", 00:12:51.598 "aliases": [ 00:12:51.598 "32a29a19-a5cb-52fe-a7d3-c2cfbac828fd" 00:12:51.598 ], 00:12:51.598 "product_name": "passthru", 00:12:51.598 "block_size": 512, 00:12:51.598 "num_blocks": 16384, 00:12:51.598 "uuid": "32a29a19-a5cb-52fe-a7d3-c2cfbac828fd", 00:12:51.598 "assigned_rate_limits": { 00:12:51.598 "rw_ios_per_sec": 0, 00:12:51.598 "rw_mbytes_per_sec": 0, 00:12:51.598 "r_mbytes_per_sec": 0, 00:12:51.598 "w_mbytes_per_sec": 0 00:12:51.598 }, 00:12:51.598 "claimed": false, 00:12:51.598 "zoned": false, 00:12:51.598 "supported_io_types": { 00:12:51.598 "read": true, 00:12:51.598 "write": true, 00:12:51.598 "unmap": true, 00:12:51.598 "write_zeroes": true, 00:12:51.598 "flush": true, 00:12:51.598 "reset": true, 00:12:51.598 "compare": false, 00:12:51.598 "compare_and_write": false, 00:12:51.598 "abort": true, 00:12:51.598 "nvme_admin": false, 00:12:51.598 "nvme_io": false 00:12:51.598 }, 00:12:51.598 "memory_domains": [ 00:12:51.598 { 00:12:51.598 "dma_device_id": "system", 00:12:51.598 "dma_device_type": 1 00:12:51.598 }, 00:12:51.598 { 00:12:51.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.598 "dma_device_type": 2 00:12:51.598 } 00:12:51.598 ], 00:12:51.598 "driver_specific": { 00:12:51.598 "passthru": { 00:12:51.598 "name": "Passthru0", 00:12:51.598 "base_bdev_name": "Malloc2" 00:12:51.598 } 00:12:51.598 } 00:12:51.598 } 00:12:51.598 ]' 00:12:51.598 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:51.856 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.857 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.857 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.857 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:51.857 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:12:51.857 13:50:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:51.857 00:12:51.857 real 0m0.324s 00:12:51.857 user 0m0.192s 00:12:51.857 sys 
0m0.064s 00:12:51.857 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.857 13:50:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:51.857 ************************************ 00:12:51.857 END TEST rpc_daemon_integrity 00:12:51.857 ************************************ 00:12:51.857 13:50:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:12:51.857 13:50:50 rpc -- rpc/rpc.sh@84 -- # killprocess 58099 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@946 -- # '[' -z 58099 ']' 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@950 -- # kill -0 58099 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@951 -- # uname 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58099 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.857 killing process with pid 58099 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58099' 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@965 -- # kill 58099 00:12:51.857 13:50:50 rpc -- common/autotest_common.sh@970 -- # wait 58099 00:12:52.423 00:12:52.423 real 0m2.768s 00:12:52.423 user 0m3.475s 00:12:52.423 sys 0m0.748s 00:12:52.423 13:50:50 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.423 13:50:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.423 ************************************ 00:12:52.423 END TEST rpc 00:12:52.423 ************************************ 00:12:52.423 13:50:50 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:12:52.423 13:50:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:52.423 13:50:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.423 13:50:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.423 ************************************ 00:12:52.423 START TEST skip_rpc 00:12:52.423 ************************************ 00:12:52.423 13:50:50 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:12:52.423 * Looking for test storage... 
00:12:52.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:12:52.423 13:50:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:52.423 13:50:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:52.423 13:50:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:12:52.423 13:50:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:52.423 13:50:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.423 13:50:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.423 ************************************ 00:12:52.423 START TEST skip_rpc 00:12:52.423 ************************************ 00:12:52.423 13:50:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:12:52.423 13:50:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58297 00:12:52.423 13:50:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:52.423 13:50:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:12:52.423 13:50:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:12:52.682 [2024-05-15 13:50:51.005107] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:12:52.682 [2024-05-15 13:50:51.005200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:12:52.682 [2024-05-15 13:50:51.146929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.941 [2024-05-15 13:50:51.251929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58297 00:12:58.253 13:50:55 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 58297 ']' 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 58297 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.253 13:50:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58297 00:12:58.253 13:50:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.253 13:50:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.253 killing process with pid 58297 00:12:58.253 13:50:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58297' 00:12:58.253 13:50:56 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 58297 00:12:58.253 13:50:56 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 58297 00:12:58.253 00:12:58.253 real 0m5.402s 00:12:58.253 user 0m5.080s 00:12:58.253 sys 0m0.248s 00:12:58.253 13:50:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.253 ************************************ 00:12:58.253 END TEST skip_rpc 00:12:58.253 ************************************ 00:12:58.253 13:50:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.253 13:50:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:12:58.253 13:50:56 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:58.253 13:50:56 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.253 13:50:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.253 ************************************ 00:12:58.253 START TEST skip_rpc_with_json 00:12:58.253 ************************************ 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58378 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58378 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 58378 ']' 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:58.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:58.254 13:50:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:58.254 [2024-05-15 13:50:56.471071] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:12:58.254 [2024-05-15 13:50:56.471140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58378 ] 00:12:58.254 [2024-05-15 13:50:56.612378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.254 [2024-05-15 13:50:56.710716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:58.821 [2024-05-15 13:50:57.316722] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:12:58.821 request: 00:12:58.821 { 00:12:58.821 "trtype": "tcp", 00:12:58.821 "method": "nvmf_get_transports", 00:12:58.821 "req_id": 1 00:12:58.821 } 00:12:58.821 Got JSON-RPC error response 00:12:58.821 response: 00:12:58.821 { 00:12:58.821 "code": -19, 00:12:58.821 "message": "No such device" 00:12:58.821 } 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:58.821 [2024-05-15 13:50:57.332779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.821 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:59.081 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.081 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:59.081 { 00:12:59.081 "subsystems": [ 00:12:59.081 { 00:12:59.081 "subsystem": "keyring", 00:12:59.081 "config": [] 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "subsystem": "iobuf", 00:12:59.081 "config": [ 00:12:59.081 { 00:12:59.081 "method": "iobuf_set_options", 00:12:59.081 "params": { 00:12:59.081 "small_pool_count": 8192, 00:12:59.081 "large_pool_count": 1024, 00:12:59.081 "small_bufsize": 8192, 00:12:59.081 "large_bufsize": 135168 00:12:59.081 } 00:12:59.081 } 00:12:59.081 ] 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "subsystem": "sock", 00:12:59.081 "config": [ 00:12:59.081 { 00:12:59.081 "method": "sock_impl_set_options", 00:12:59.081 "params": { 00:12:59.081 "impl_name": "uring", 00:12:59.081 "recv_buf_size": 2097152, 00:12:59.081 "send_buf_size": 2097152, 00:12:59.081 "enable_recv_pipe": true, 00:12:59.081 "enable_quickack": false, 00:12:59.081 "enable_placement_id": 0, 00:12:59.081 "enable_zerocopy_send_server": false, 
00:12:59.081 "enable_zerocopy_send_client": false, 00:12:59.081 "zerocopy_threshold": 0, 00:12:59.081 "tls_version": 0, 00:12:59.081 "enable_ktls": false 00:12:59.081 } 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "method": "sock_impl_set_options", 00:12:59.081 "params": { 00:12:59.081 "impl_name": "posix", 00:12:59.081 "recv_buf_size": 2097152, 00:12:59.081 "send_buf_size": 2097152, 00:12:59.081 "enable_recv_pipe": true, 00:12:59.081 "enable_quickack": false, 00:12:59.081 "enable_placement_id": 0, 00:12:59.081 "enable_zerocopy_send_server": true, 00:12:59.081 "enable_zerocopy_send_client": false, 00:12:59.081 "zerocopy_threshold": 0, 00:12:59.081 "tls_version": 0, 00:12:59.081 "enable_ktls": false 00:12:59.081 } 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "method": "sock_impl_set_options", 00:12:59.081 "params": { 00:12:59.081 "impl_name": "ssl", 00:12:59.081 "recv_buf_size": 4096, 00:12:59.081 "send_buf_size": 4096, 00:12:59.081 "enable_recv_pipe": true, 00:12:59.081 "enable_quickack": false, 00:12:59.081 "enable_placement_id": 0, 00:12:59.081 "enable_zerocopy_send_server": true, 00:12:59.081 "enable_zerocopy_send_client": false, 00:12:59.081 "zerocopy_threshold": 0, 00:12:59.081 "tls_version": 0, 00:12:59.081 "enable_ktls": false 00:12:59.081 } 00:12:59.081 } 00:12:59.081 ] 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "subsystem": "vmd", 00:12:59.081 "config": [] 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "subsystem": "accel", 00:12:59.081 "config": [ 00:12:59.081 { 00:12:59.081 "method": "accel_set_options", 00:12:59.081 "params": { 00:12:59.081 "small_cache_size": 128, 00:12:59.081 "large_cache_size": 16, 00:12:59.081 "task_count": 2048, 00:12:59.081 "sequence_count": 2048, 00:12:59.081 "buf_count": 2048 00:12:59.081 } 00:12:59.081 } 00:12:59.081 ] 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "subsystem": "bdev", 00:12:59.081 "config": [ 00:12:59.081 { 00:12:59.081 "method": "bdev_set_options", 00:12:59.081 "params": { 00:12:59.081 "bdev_io_pool_size": 65535, 00:12:59.081 "bdev_io_cache_size": 256, 00:12:59.081 "bdev_auto_examine": true, 00:12:59.081 "iobuf_small_cache_size": 128, 00:12:59.081 "iobuf_large_cache_size": 16 00:12:59.081 } 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "method": "bdev_raid_set_options", 00:12:59.081 "params": { 00:12:59.081 "process_window_size_kb": 1024 00:12:59.081 } 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "method": "bdev_iscsi_set_options", 00:12:59.081 "params": { 00:12:59.081 "timeout_sec": 30 00:12:59.081 } 00:12:59.081 }, 00:12:59.081 { 00:12:59.081 "method": "bdev_nvme_set_options", 00:12:59.081 "params": { 00:12:59.081 "action_on_timeout": "none", 00:12:59.081 "timeout_us": 0, 00:12:59.081 "timeout_admin_us": 0, 00:12:59.081 "keep_alive_timeout_ms": 10000, 00:12:59.081 "arbitration_burst": 0, 00:12:59.081 "low_priority_weight": 0, 00:12:59.081 "medium_priority_weight": 0, 00:12:59.081 "high_priority_weight": 0, 00:12:59.081 "nvme_adminq_poll_period_us": 10000, 00:12:59.081 "nvme_ioq_poll_period_us": 0, 00:12:59.081 "io_queue_requests": 0, 00:12:59.082 "delay_cmd_submit": true, 00:12:59.082 "transport_retry_count": 4, 00:12:59.082 "bdev_retry_count": 3, 00:12:59.082 "transport_ack_timeout": 0, 00:12:59.082 "ctrlr_loss_timeout_sec": 0, 00:12:59.082 "reconnect_delay_sec": 0, 00:12:59.082 "fast_io_fail_timeout_sec": 0, 00:12:59.082 "disable_auto_failback": false, 00:12:59.082 "generate_uuids": false, 00:12:59.082 "transport_tos": 0, 00:12:59.082 "nvme_error_stat": false, 00:12:59.082 "rdma_srq_size": 0, 00:12:59.082 "io_path_stat": false, 
00:12:59.082 "allow_accel_sequence": false, 00:12:59.082 "rdma_max_cq_size": 0, 00:12:59.082 "rdma_cm_event_timeout_ms": 0, 00:12:59.082 "dhchap_digests": [ 00:12:59.082 "sha256", 00:12:59.082 "sha384", 00:12:59.082 "sha512" 00:12:59.082 ], 00:12:59.082 "dhchap_dhgroups": [ 00:12:59.082 "null", 00:12:59.082 "ffdhe2048", 00:12:59.082 "ffdhe3072", 00:12:59.082 "ffdhe4096", 00:12:59.082 "ffdhe6144", 00:12:59.082 "ffdhe8192" 00:12:59.082 ] 00:12:59.082 } 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "method": "bdev_nvme_set_hotplug", 00:12:59.082 "params": { 00:12:59.082 "period_us": 100000, 00:12:59.082 "enable": false 00:12:59.082 } 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "method": "bdev_wait_for_examine" 00:12:59.082 } 00:12:59.082 ] 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "scsi", 00:12:59.082 "config": null 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "scheduler", 00:12:59.082 "config": [ 00:12:59.082 { 00:12:59.082 "method": "framework_set_scheduler", 00:12:59.082 "params": { 00:12:59.082 "name": "static" 00:12:59.082 } 00:12:59.082 } 00:12:59.082 ] 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "vhost_scsi", 00:12:59.082 "config": [] 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "vhost_blk", 00:12:59.082 "config": [] 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "ublk", 00:12:59.082 "config": [] 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "nbd", 00:12:59.082 "config": [] 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "nvmf", 00:12:59.082 "config": [ 00:12:59.082 { 00:12:59.082 "method": "nvmf_set_config", 00:12:59.082 "params": { 00:12:59.082 "discovery_filter": "match_any", 00:12:59.082 "admin_cmd_passthru": { 00:12:59.082 "identify_ctrlr": false 00:12:59.082 } 00:12:59.082 } 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "method": "nvmf_set_max_subsystems", 00:12:59.082 "params": { 00:12:59.082 "max_subsystems": 1024 00:12:59.082 } 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "method": "nvmf_set_crdt", 00:12:59.082 "params": { 00:12:59.082 "crdt1": 0, 00:12:59.082 "crdt2": 0, 00:12:59.082 "crdt3": 0 00:12:59.082 } 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "method": "nvmf_create_transport", 00:12:59.082 "params": { 00:12:59.082 "trtype": "TCP", 00:12:59.082 "max_queue_depth": 128, 00:12:59.082 "max_io_qpairs_per_ctrlr": 127, 00:12:59.082 "in_capsule_data_size": 4096, 00:12:59.082 "max_io_size": 131072, 00:12:59.082 "io_unit_size": 131072, 00:12:59.082 "max_aq_depth": 128, 00:12:59.082 "num_shared_buffers": 511, 00:12:59.082 "buf_cache_size": 4294967295, 00:12:59.082 "dif_insert_or_strip": false, 00:12:59.082 "zcopy": false, 00:12:59.082 "c2h_success": true, 00:12:59.082 "sock_priority": 0, 00:12:59.082 "abort_timeout_sec": 1, 00:12:59.082 "ack_timeout": 0, 00:12:59.082 "data_wr_pool_size": 0 00:12:59.082 } 00:12:59.082 } 00:12:59.082 ] 00:12:59.082 }, 00:12:59.082 { 00:12:59.082 "subsystem": "iscsi", 00:12:59.082 "config": [ 00:12:59.082 { 00:12:59.082 "method": "iscsi_set_options", 00:12:59.082 "params": { 00:12:59.082 "node_base": "iqn.2016-06.io.spdk", 00:12:59.082 "max_sessions": 128, 00:12:59.082 "max_connections_per_session": 2, 00:12:59.082 "max_queue_depth": 64, 00:12:59.082 "default_time2wait": 2, 00:12:59.082 "default_time2retain": 20, 00:12:59.082 "first_burst_length": 8192, 00:12:59.082 "immediate_data": true, 00:12:59.082 "allow_duplicated_isid": false, 00:12:59.082 "error_recovery_level": 0, 00:12:59.082 "nop_timeout": 60, 00:12:59.082 "nop_in_interval": 30, 00:12:59.082 "disable_chap": 
false, 00:12:59.082 "require_chap": false, 00:12:59.082 "mutual_chap": false, 00:12:59.082 "chap_group": 0, 00:12:59.082 "max_large_datain_per_connection": 64, 00:12:59.082 "max_r2t_per_connection": 4, 00:12:59.082 "pdu_pool_size": 36864, 00:12:59.082 "immediate_data_pool_size": 16384, 00:12:59.082 "data_out_pool_size": 2048 00:12:59.082 } 00:12:59.082 } 00:12:59.082 ] 00:12:59.082 } 00:12:59.082 ] 00:12:59.082 } 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58378 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 58378 ']' 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 58378 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58378 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:59.082 killing process with pid 58378 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58378' 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 58378 00:12:59.082 13:50:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 58378 00:12:59.651 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58406 00:12:59.651 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:12:59.651 13:50:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58406 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 58406 ']' 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 58406 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58406 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.930 killing process with pid 58406 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58406' 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 58406 00:13:04.930 13:51:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 58406 00:13:04.930 13:51:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:04.930 13:51:03 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:04.930 00:13:04.930 real 0m6.892s 00:13:04.930 user 0m6.609s 00:13:04.930 sys 0m0.565s 00:13:04.930 13:51:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:04.930 ************************************ 00:13:04.930 END TEST skip_rpc_with_json 00:13:04.930 ************************************ 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:04.931 13:51:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:04.931 13:51:03 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:04.931 13:51:03 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.931 13:51:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.931 ************************************ 00:13:04.931 START TEST skip_rpc_with_delay 00:13:04.931 ************************************ 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:04.931 [2024-05-15 13:51:03.413795] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:13:04.931 [2024-05-15 13:51:03.413902] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:13:04.931 ************************************ 00:13:04.931 END TEST skip_rpc_with_delay 00:13:04.931 ************************************ 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:04.931 00:13:04.931 real 0m0.066s 00:13:04.931 user 0m0.042s 00:13:04.931 sys 0m0.024s 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:04.931 13:51:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:13:04.931 13:51:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:13:04.931 13:51:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:04.931 13:51:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:04.931 13:51:03 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:04.931 13:51:03 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.931 13:51:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.190 ************************************ 00:13:05.190 START TEST exit_on_failed_rpc_init 00:13:05.190 ************************************ 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58515 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58515 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 58515 ']' 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.190 13:51:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:05.190 [2024-05-15 13:51:03.552647] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:13:05.190 [2024-05-15 13:51:03.552728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58515 ] 00:13:05.190 [2024-05-15 13:51:03.679001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.447 [2024-05-15 13:51:03.783177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:06.024 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:06.024 [2024-05-15 13:51:04.462518] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:06.024 [2024-05-15 13:51:04.462594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58533 ] 00:13:06.282 [2024-05-15 13:51:04.602360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.282 [2024-05-15 13:51:04.733512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.282 [2024-05-15 13:51:04.733597] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:13:06.282 [2024-05-15 13:51:04.733609] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:06.282 [2024-05-15 13:51:04.733618] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:06.541 13:51:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58515 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 58515 ']' 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 58515 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58515 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:06.542 killing process with pid 58515 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58515' 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 58515 00:13:06.542 13:51:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 58515 00:13:06.800 00:13:06.800 real 0m1.744s 00:13:06.800 user 0m2.002s 00:13:06.800 sys 0m0.392s 00:13:06.800 13:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.800 13:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:06.800 ************************************ 00:13:06.800 END TEST exit_on_failed_rpc_init 00:13:06.800 ************************************ 00:13:06.800 13:51:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:06.800 00:13:06.800 real 0m14.487s 00:13:06.800 user 0m13.878s 00:13:06.800 sys 0m1.475s 00:13:06.800 13:51:05 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.800 ************************************ 00:13:06.800 END TEST skip_rpc 00:13:06.800 ************************************ 00:13:06.800 13:51:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.800 13:51:05 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:06.800 13:51:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:06.800 13:51:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.800 13:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.800 
************************************ 00:13:06.800 START TEST rpc_client 00:13:06.800 ************************************ 00:13:06.800 13:51:05 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:07.060 * Looking for test storage... 00:13:07.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:13:07.060 13:51:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:13:07.060 OK 00:13:07.060 13:51:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:07.060 00:13:07.060 real 0m0.137s 00:13:07.060 user 0m0.059s 00:13:07.060 sys 0m0.088s 00:13:07.060 13:51:05 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:07.060 13:51:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:13:07.060 ************************************ 00:13:07.060 END TEST rpc_client 00:13:07.060 ************************************ 00:13:07.060 13:51:05 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:07.060 13:51:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:07.060 13:51:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:07.060 13:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:07.060 ************************************ 00:13:07.060 START TEST json_config 00:13:07.060 ************************************ 00:13:07.060 13:51:05 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:07.060 13:51:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.319 13:51:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.320 13:51:05 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.320 13:51:05 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.320 13:51:05 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.320 13:51:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.320 13:51:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.320 13:51:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.320 13:51:05 json_config -- paths/export.sh@5 -- # export PATH 00:13:07.320 13:51:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@47 -- # : 0 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.320 13:51:05 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:13:07.320 13:51:05 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:07.320 INFO: JSON configuration test init 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:07.320 Waiting for target to run... 00:13:07.320 13:51:05 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:13:07.320 13:51:05 json_config -- json_config/common.sh@9 -- # local app=target 00:13:07.320 13:51:05 json_config -- json_config/common.sh@10 -- # shift 00:13:07.320 13:51:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:07.320 13:51:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:07.320 13:51:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:13:07.320 13:51:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:07.320 13:51:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:07.320 13:51:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58651 00:13:07.320 13:51:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:13:07.320 13:51:05 json_config -- json_config/common.sh@25 -- # waitforlisten 58651 /var/tmp/spdk_tgt.sock 00:13:07.320 13:51:05 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@827 -- # '[' -z 58651 ']' 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:07.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:07.320 13:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:07.320 [2024-05-15 13:51:05.727316] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:07.320 [2024-05-15 13:51:05.727608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58651 ] 00:13:07.579 [2024-05-15 13:51:06.090652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.837 [2024-05-15 13:51:06.174559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.096 13:51:06 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:08.096 13:51:06 json_config -- common/autotest_common.sh@860 -- # return 0 00:13:08.096 13:51:06 json_config -- json_config/common.sh@26 -- # echo '' 00:13:08.096 00:13:08.096 13:51:06 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:13:08.096 13:51:06 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:13:08.096 13:51:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.096 13:51:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:08.096 13:51:06 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:13:08.096 13:51:06 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:13:08.096 13:51:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.096 13:51:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:08.096 13:51:06 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:13:08.096 13:51:06 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:13:08.096 13:51:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:13:08.663 13:51:07 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:13:08.663 13:51:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:13:08.663 13:51:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.663 13:51:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:08.663 13:51:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:13:08.663 13:51:07 json_config -- json_config/json_config.sh@46 -- # 
enabled_types=('bdev_register' 'bdev_unregister') 00:13:08.663 13:51:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:13:08.663 13:51:07 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:13:08.663 13:51:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:13:08.663 13:51:07 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@48 -- # local get_types 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:13:08.922 13:51:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.922 13:51:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@55 -- # return 0 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:13:08.922 13:51:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.922 13:51:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:13:08.922 13:51:07 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:08.922 13:51:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:09.180 MallocForNvmf0 00:13:09.180 13:51:07 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:09.180 13:51:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:09.180 MallocForNvmf1 00:13:09.180 13:51:07 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:13:09.180 13:51:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:13:09.439 [2024-05-15 13:51:07.905329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.439 13:51:07 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:09.439 13:51:07 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:09.698 13:51:08 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:09.698 13:51:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:09.956 13:51:08 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:09.956 13:51:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:09.956 13:51:08 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:09.956 13:51:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:10.215 [2024-05-15 13:51:08.728617] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:10.215 [2024-05-15 13:51:08.728994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:10.215 13:51:08 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:13:10.215 13:51:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.215 13:51:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:10.473 13:51:08 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:13:10.473 13:51:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.473 13:51:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:10.473 13:51:08 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:13:10.473 13:51:08 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:10.473 13:51:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:10.731 MallocBdevForConfigChangeCheck 00:13:10.731 13:51:09 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:13:10.731 13:51:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.731 13:51:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:10.731 13:51:09 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:13:10.732 13:51:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:10.991 INFO: shutting down applications... 00:13:10.991 13:51:09 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
00:13:10.991 13:51:09 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:13:10.991 13:51:09 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:13:10.991 13:51:09 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:13:10.991 13:51:09 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:13:11.250 Calling clear_iscsi_subsystem 00:13:11.250 Calling clear_nvmf_subsystem 00:13:11.250 Calling clear_nbd_subsystem 00:13:11.250 Calling clear_ublk_subsystem 00:13:11.250 Calling clear_vhost_blk_subsystem 00:13:11.250 Calling clear_vhost_scsi_subsystem 00:13:11.250 Calling clear_bdev_subsystem 00:13:11.250 13:51:09 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:13:11.250 13:51:09 json_config -- json_config/json_config.sh@343 -- # count=100 00:13:11.250 13:51:09 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:13:11.250 13:51:09 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:11.250 13:51:09 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:13:11.250 13:51:09 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:13:11.509 13:51:10 json_config -- json_config/json_config.sh@345 -- # break 00:13:11.509 13:51:10 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:13:11.509 13:51:10 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:13:11.509 13:51:10 json_config -- json_config/common.sh@31 -- # local app=target 00:13:11.509 13:51:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:11.509 13:51:10 json_config -- json_config/common.sh@35 -- # [[ -n 58651 ]] 00:13:11.509 13:51:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58651 00:13:11.509 [2024-05-15 13:51:10.058213] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:11.509 13:51:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:11.509 13:51:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:11.509 13:51:10 json_config -- json_config/common.sh@41 -- # kill -0 58651 00:13:11.509 13:51:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:13:12.076 13:51:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:13:12.076 13:51:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:12.076 13:51:10 json_config -- json_config/common.sh@41 -- # kill -0 58651 00:13:12.076 13:51:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:12.076 13:51:10 json_config -- json_config/common.sh@43 -- # break 00:13:12.076 13:51:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:12.076 SPDK target shutdown done 00:13:12.076 13:51:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:12.076 INFO: relaunching applications... 00:13:12.076 13:51:10 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:13:12.076 13:51:10 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:12.076 13:51:10 json_config -- json_config/common.sh@9 -- # local app=target 00:13:12.076 13:51:10 json_config -- json_config/common.sh@10 -- # shift 00:13:12.076 13:51:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:12.076 13:51:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:12.076 13:51:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:13:12.076 13:51:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:12.076 13:51:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:12.076 13:51:10 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:12.076 13:51:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58831 00:13:12.076 Waiting for target to run... 00:13:12.076 13:51:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:12.076 13:51:10 json_config -- json_config/common.sh@25 -- # waitforlisten 58831 /var/tmp/spdk_tgt.sock 00:13:12.076 13:51:10 json_config -- common/autotest_common.sh@827 -- # '[' -z 58831 ']' 00:13:12.076 13:51:10 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:12.076 13:51:10 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:12.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:12.076 13:51:10 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:12.076 13:51:10 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:12.076 13:51:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:12.076 [2024-05-15 13:51:10.629494] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:12.076 [2024-05-15 13:51:10.629581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58831 ] 00:13:12.642 [2024-05-15 13:51:11.004365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.642 [2024-05-15 13:51:11.089505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.901 [2024-05-15 13:51:11.396999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.901 [2024-05-15 13:51:11.428870] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:12.901 [2024-05-15 13:51:11.429083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:13.161 00:13:13.161 INFO: Checking if target configuration is the same... 
00:13:13.161 13:51:11 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:13.161 13:51:11 json_config -- common/autotest_common.sh@860 -- # return 0 00:13:13.161 13:51:11 json_config -- json_config/common.sh@26 -- # echo '' 00:13:13.161 13:51:11 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:13:13.161 13:51:11 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:13:13.161 13:51:11 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:13:13.161 13:51:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:13.161 13:51:11 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:13.161 + '[' 2 -ne 2 ']' 00:13:13.161 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:13:13.161 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:13:13.161 + rootdir=/home/vagrant/spdk_repo/spdk 00:13:13.161 +++ basename /dev/fd/62 00:13:13.161 ++ mktemp /tmp/62.XXX 00:13:13.161 + tmp_file_1=/tmp/62.dnx 00:13:13.161 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:13.161 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:13:13.161 + tmp_file_2=/tmp/spdk_tgt_config.json.gZh 00:13:13.161 + ret=0 00:13:13.161 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:13.419 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:13.419 + diff -u /tmp/62.dnx /tmp/spdk_tgt_config.json.gZh 00:13:13.419 INFO: JSON config files are the same 00:13:13.419 + echo 'INFO: JSON config files are the same' 00:13:13.419 + rm /tmp/62.dnx /tmp/spdk_tgt_config.json.gZh 00:13:13.419 + exit 0 00:13:13.419 13:51:11 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:13:13.419 INFO: changing configuration and checking if this can be detected... 00:13:13.419 13:51:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:13:13.419 13:51:11 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:13:13.419 13:51:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:13:13.680 13:51:12 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:13.680 13:51:12 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:13:13.680 13:51:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:13.680 + '[' 2 -ne 2 ']' 00:13:13.680 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:13:13.680 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:13:13.680 + rootdir=/home/vagrant/spdk_repo/spdk 00:13:13.680 +++ basename /dev/fd/62 00:13:13.680 ++ mktemp /tmp/62.XXX 00:13:13.680 + tmp_file_1=/tmp/62.dOC 00:13:13.680 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:13.680 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:13:13.680 + tmp_file_2=/tmp/spdk_tgt_config.json.W3s 00:13:13.680 + ret=0 00:13:13.680 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:13.938 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:14.196 + diff -u /tmp/62.dOC /tmp/spdk_tgt_config.json.W3s 00:13:14.196 + ret=1 00:13:14.196 + echo '=== Start of file: /tmp/62.dOC ===' 00:13:14.196 + cat /tmp/62.dOC 00:13:14.196 + echo '=== End of file: /tmp/62.dOC ===' 00:13:14.196 + echo '' 00:13:14.196 + echo '=== Start of file: /tmp/spdk_tgt_config.json.W3s ===' 00:13:14.196 + cat /tmp/spdk_tgt_config.json.W3s 00:13:14.196 + echo '=== End of file: /tmp/spdk_tgt_config.json.W3s ===' 00:13:14.196 + echo '' 00:13:14.196 + rm /tmp/62.dOC /tmp/spdk_tgt_config.json.W3s 00:13:14.196 + exit 1 00:13:14.196 INFO: configuration change detected. 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:13:14.196 13:51:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:14.196 13:51:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@317 -- # [[ -n 58831 ]] 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:13:14.196 13:51:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:14.196 13:51:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@193 -- # uname -s 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:13:14.196 13:51:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.196 13:51:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:14.196 13:51:12 json_config -- json_config/json_config.sh@323 -- # killprocess 58831 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@946 -- # '[' -z 58831 ']' 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@950 -- # kill -0 58831 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@951 -- # uname 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58831 00:13:14.197 
killing process with pid 58831 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58831' 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@965 -- # kill 58831 00:13:14.197 [2024-05-15 13:51:12.670207] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:14.197 13:51:12 json_config -- common/autotest_common.sh@970 -- # wait 58831 00:13:14.455 13:51:12 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:14.455 13:51:12 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:13:14.455 13:51:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.455 13:51:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:14.455 INFO: Success 00:13:14.455 13:51:12 json_config -- json_config/json_config.sh@328 -- # return 0 00:13:14.455 13:51:12 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:13:14.455 ************************************ 00:13:14.455 END TEST json_config 00:13:14.455 ************************************ 00:13:14.455 00:13:14.455 real 0m7.432s 00:13:14.455 user 0m10.081s 00:13:14.455 sys 0m1.782s 00:13:14.455 13:51:12 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:14.455 13:51:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:14.455 13:51:13 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:14.455 13:51:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:14.455 13:51:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:14.455 13:51:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.715 ************************************ 00:13:14.715 START TEST json_config_extra_key 00:13:14.715 ************************************ 00:13:14.715 13:51:13 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.715 13:51:13 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.715 13:51:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.715 13:51:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.715 13:51:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.715 13:51:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.715 13:51:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.715 13:51:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.715 13:51:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:13:14.715 13:51:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.715 13:51:13 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.715 13:51:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:13:14.715 INFO: launching applications... 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:13:14.715 13:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58976 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:14.715 Waiting for target to run... 
00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:14.715 13:51:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58976 /var/tmp/spdk_tgt.sock 00:13:14.715 13:51:13 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 58976 ']' 00:13:14.716 13:51:13 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:14.716 13:51:13 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:14.716 13:51:13 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:14.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:14.716 13:51:13 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:14.716 13:51:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:14.716 [2024-05-15 13:51:13.194801] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:14.716 [2024-05-15 13:51:13.194884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58976 ] 00:13:15.286 [2024-05-15 13:51:13.550936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.286 [2024-05-15 13:51:13.630483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.545 00:13:15.545 INFO: shutting down applications... 00:13:15.545 13:51:14 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:15.545 13:51:14 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:13:15.545 13:51:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:13:15.545 13:51:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:13:15.545 13:51:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:13:15.545 13:51:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:13:15.546 13:51:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:15.546 13:51:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58976 ]] 00:13:15.546 13:51:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58976 00:13:15.546 13:51:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:15.546 13:51:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:15.546 13:51:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58976 00:13:15.546 13:51:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:16.112 13:51:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:16.112 13:51:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:16.112 13:51:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58976 00:13:16.112 13:51:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:16.112 13:51:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:13:16.112 13:51:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:16.112 13:51:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:16.112 SPDK target shutdown done 00:13:16.112 13:51:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:13:16.112 Success 00:13:16.112 00:13:16.112 real 0m1.522s 00:13:16.112 user 0m1.270s 00:13:16.112 sys 0m0.379s 00:13:16.112 13:51:14 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:16.112 ************************************ 00:13:16.112 END TEST json_config_extra_key 00:13:16.112 ************************************ 00:13:16.112 13:51:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:16.112 13:51:14 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:16.112 13:51:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:16.112 13:51:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:16.112 13:51:14 -- common/autotest_common.sh@10 -- # set +x 00:13:16.112 ************************************ 00:13:16.112 START TEST alias_rpc 00:13:16.112 ************************************ 00:13:16.112 13:51:14 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:16.370 * Looking for test storage... 00:13:16.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:13:16.370 13:51:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:16.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:16.370 13:51:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59036 00:13:16.370 13:51:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59036 00:13:16.370 13:51:14 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 59036 ']' 00:13:16.370 13:51:14 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.370 13:51:14 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:16.370 13:51:14 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.370 13:51:14 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:16.370 13:51:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.370 13:51:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:16.370 [2024-05-15 13:51:14.765205] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:16.371 [2024-05-15 13:51:14.765549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59036 ] 00:13:16.371 [2024-05-15 13:51:14.905472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.627 [2024-05-15 13:51:15.013243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.192 13:51:15 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:17.192 13:51:15 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:17.192 13:51:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:13:17.450 13:51:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59036 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 59036 ']' 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 59036 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59036 00:13:17.450 killing process with pid 59036 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59036' 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@965 -- # kill 59036 00:13:17.450 13:51:15 alias_rpc -- common/autotest_common.sh@970 -- # wait 59036 00:13:17.709 ************************************ 00:13:17.709 END TEST alias_rpc 00:13:17.709 ************************************ 00:13:17.709 00:13:17.709 real 0m1.625s 00:13:17.709 user 0m1.725s 00:13:17.709 sys 0m0.409s 00:13:17.709 13:51:16 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.709 13:51:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.969 13:51:16 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:13:17.969 13:51:16 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:17.969 13:51:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:17.969 13:51:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:13:17.969 13:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:17.969 ************************************ 00:13:17.969 START TEST spdkcli_tcp 00:13:17.969 ************************************ 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:17.969 * Looking for test storage... 00:13:17.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59112 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:13:17.969 13:51:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59112 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 59112 ']' 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:17.969 13:51:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.969 [2024-05-15 13:51:16.482866] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:13:17.969 [2024-05-15 13:51:16.482952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:13:18.239 [2024-05-15 13:51:16.625835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:18.239 [2024-05-15 13:51:16.734820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.239 [2024-05-15 13:51:16.734821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.804 13:51:17 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:18.804 13:51:17 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:13:18.804 13:51:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:13:18.804 13:51:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59129 00:13:18.804 13:51:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:13:19.063 [ 00:13:19.063 "bdev_malloc_delete", 00:13:19.063 "bdev_malloc_create", 00:13:19.063 "bdev_null_resize", 00:13:19.063 "bdev_null_delete", 00:13:19.063 "bdev_null_create", 00:13:19.063 "bdev_nvme_cuse_unregister", 00:13:19.063 "bdev_nvme_cuse_register", 00:13:19.063 "bdev_opal_new_user", 00:13:19.063 "bdev_opal_set_lock_state", 00:13:19.063 "bdev_opal_delete", 00:13:19.063 "bdev_opal_get_info", 00:13:19.063 "bdev_opal_create", 00:13:19.063 "bdev_nvme_opal_revert", 00:13:19.063 "bdev_nvme_opal_init", 00:13:19.063 "bdev_nvme_send_cmd", 00:13:19.063 "bdev_nvme_get_path_iostat", 00:13:19.063 "bdev_nvme_get_mdns_discovery_info", 00:13:19.063 "bdev_nvme_stop_mdns_discovery", 00:13:19.063 "bdev_nvme_start_mdns_discovery", 00:13:19.063 "bdev_nvme_set_multipath_policy", 00:13:19.063 "bdev_nvme_set_preferred_path", 00:13:19.063 "bdev_nvme_get_io_paths", 00:13:19.063 "bdev_nvme_remove_error_injection", 00:13:19.063 "bdev_nvme_add_error_injection", 00:13:19.063 "bdev_nvme_get_discovery_info", 00:13:19.063 "bdev_nvme_stop_discovery", 00:13:19.063 "bdev_nvme_start_discovery", 00:13:19.063 "bdev_nvme_get_controller_health_info", 00:13:19.063 "bdev_nvme_disable_controller", 00:13:19.063 "bdev_nvme_enable_controller", 00:13:19.063 "bdev_nvme_reset_controller", 00:13:19.063 "bdev_nvme_get_transport_statistics", 00:13:19.063 "bdev_nvme_apply_firmware", 00:13:19.063 "bdev_nvme_detach_controller", 00:13:19.063 "bdev_nvme_get_controllers", 00:13:19.063 "bdev_nvme_attach_controller", 00:13:19.063 "bdev_nvme_set_hotplug", 00:13:19.063 "bdev_nvme_set_options", 00:13:19.063 "bdev_passthru_delete", 00:13:19.063 "bdev_passthru_create", 00:13:19.063 "bdev_lvol_check_shallow_copy", 00:13:19.063 "bdev_lvol_start_shallow_copy", 00:13:19.063 "bdev_lvol_grow_lvstore", 00:13:19.063 "bdev_lvol_get_lvols", 00:13:19.063 "bdev_lvol_get_lvstores", 00:13:19.063 "bdev_lvol_delete", 00:13:19.063 "bdev_lvol_set_read_only", 00:13:19.063 "bdev_lvol_resize", 00:13:19.063 "bdev_lvol_decouple_parent", 00:13:19.063 "bdev_lvol_inflate", 00:13:19.063 "bdev_lvol_rename", 00:13:19.063 "bdev_lvol_clone_bdev", 00:13:19.063 "bdev_lvol_clone", 00:13:19.063 "bdev_lvol_snapshot", 00:13:19.063 "bdev_lvol_create", 00:13:19.063 "bdev_lvol_delete_lvstore", 00:13:19.063 "bdev_lvol_rename_lvstore", 00:13:19.063 "bdev_lvol_create_lvstore", 00:13:19.063 "bdev_raid_set_options", 00:13:19.063 "bdev_raid_remove_base_bdev", 00:13:19.063 
"bdev_raid_add_base_bdev", 00:13:19.063 "bdev_raid_delete", 00:13:19.063 "bdev_raid_create", 00:13:19.063 "bdev_raid_get_bdevs", 00:13:19.063 "bdev_error_inject_error", 00:13:19.064 "bdev_error_delete", 00:13:19.064 "bdev_error_create", 00:13:19.064 "bdev_split_delete", 00:13:19.064 "bdev_split_create", 00:13:19.064 "bdev_delay_delete", 00:13:19.064 "bdev_delay_create", 00:13:19.064 "bdev_delay_update_latency", 00:13:19.064 "bdev_zone_block_delete", 00:13:19.064 "bdev_zone_block_create", 00:13:19.064 "blobfs_create", 00:13:19.064 "blobfs_detect", 00:13:19.064 "blobfs_set_cache_size", 00:13:19.064 "bdev_aio_delete", 00:13:19.064 "bdev_aio_rescan", 00:13:19.064 "bdev_aio_create", 00:13:19.064 "bdev_ftl_set_property", 00:13:19.064 "bdev_ftl_get_properties", 00:13:19.064 "bdev_ftl_get_stats", 00:13:19.064 "bdev_ftl_unmap", 00:13:19.064 "bdev_ftl_unload", 00:13:19.064 "bdev_ftl_delete", 00:13:19.064 "bdev_ftl_load", 00:13:19.064 "bdev_ftl_create", 00:13:19.064 "bdev_virtio_attach_controller", 00:13:19.064 "bdev_virtio_scsi_get_devices", 00:13:19.064 "bdev_virtio_detach_controller", 00:13:19.064 "bdev_virtio_blk_set_hotplug", 00:13:19.064 "bdev_iscsi_delete", 00:13:19.064 "bdev_iscsi_create", 00:13:19.064 "bdev_iscsi_set_options", 00:13:19.064 "bdev_uring_delete", 00:13:19.064 "bdev_uring_rescan", 00:13:19.064 "bdev_uring_create", 00:13:19.064 "accel_error_inject_error", 00:13:19.064 "ioat_scan_accel_module", 00:13:19.064 "dsa_scan_accel_module", 00:13:19.064 "iaa_scan_accel_module", 00:13:19.064 "keyring_file_remove_key", 00:13:19.064 "keyring_file_add_key", 00:13:19.064 "iscsi_get_histogram", 00:13:19.064 "iscsi_enable_histogram", 00:13:19.064 "iscsi_set_options", 00:13:19.064 "iscsi_get_auth_groups", 00:13:19.064 "iscsi_auth_group_remove_secret", 00:13:19.064 "iscsi_auth_group_add_secret", 00:13:19.064 "iscsi_delete_auth_group", 00:13:19.064 "iscsi_create_auth_group", 00:13:19.064 "iscsi_set_discovery_auth", 00:13:19.064 "iscsi_get_options", 00:13:19.064 "iscsi_target_node_request_logout", 00:13:19.064 "iscsi_target_node_set_redirect", 00:13:19.064 "iscsi_target_node_set_auth", 00:13:19.064 "iscsi_target_node_add_lun", 00:13:19.064 "iscsi_get_stats", 00:13:19.064 "iscsi_get_connections", 00:13:19.064 "iscsi_portal_group_set_auth", 00:13:19.064 "iscsi_start_portal_group", 00:13:19.064 "iscsi_delete_portal_group", 00:13:19.064 "iscsi_create_portal_group", 00:13:19.064 "iscsi_get_portal_groups", 00:13:19.064 "iscsi_delete_target_node", 00:13:19.064 "iscsi_target_node_remove_pg_ig_maps", 00:13:19.064 "iscsi_target_node_add_pg_ig_maps", 00:13:19.064 "iscsi_create_target_node", 00:13:19.064 "iscsi_get_target_nodes", 00:13:19.064 "iscsi_delete_initiator_group", 00:13:19.064 "iscsi_initiator_group_remove_initiators", 00:13:19.064 "iscsi_initiator_group_add_initiators", 00:13:19.064 "iscsi_create_initiator_group", 00:13:19.064 "iscsi_get_initiator_groups", 00:13:19.064 "nvmf_set_crdt", 00:13:19.064 "nvmf_set_config", 00:13:19.064 "nvmf_set_max_subsystems", 00:13:19.064 "nvmf_stop_mdns_prr", 00:13:19.064 "nvmf_publish_mdns_prr", 00:13:19.064 "nvmf_subsystem_get_listeners", 00:13:19.064 "nvmf_subsystem_get_qpairs", 00:13:19.064 "nvmf_subsystem_get_controllers", 00:13:19.064 "nvmf_get_stats", 00:13:19.064 "nvmf_get_transports", 00:13:19.064 "nvmf_create_transport", 00:13:19.064 "nvmf_get_targets", 00:13:19.064 "nvmf_delete_target", 00:13:19.064 "nvmf_create_target", 00:13:19.064 "nvmf_subsystem_allow_any_host", 00:13:19.064 "nvmf_subsystem_remove_host", 00:13:19.064 "nvmf_subsystem_add_host", 
00:13:19.064 "nvmf_ns_remove_host", 00:13:19.064 "nvmf_ns_add_host", 00:13:19.064 "nvmf_subsystem_remove_ns", 00:13:19.064 "nvmf_subsystem_add_ns", 00:13:19.064 "nvmf_subsystem_listener_set_ana_state", 00:13:19.064 "nvmf_discovery_get_referrals", 00:13:19.064 "nvmf_discovery_remove_referral", 00:13:19.064 "nvmf_discovery_add_referral", 00:13:19.064 "nvmf_subsystem_remove_listener", 00:13:19.064 "nvmf_subsystem_add_listener", 00:13:19.064 "nvmf_delete_subsystem", 00:13:19.064 "nvmf_create_subsystem", 00:13:19.064 "nvmf_get_subsystems", 00:13:19.064 "env_dpdk_get_mem_stats", 00:13:19.064 "nbd_get_disks", 00:13:19.064 "nbd_stop_disk", 00:13:19.064 "nbd_start_disk", 00:13:19.064 "ublk_recover_disk", 00:13:19.064 "ublk_get_disks", 00:13:19.064 "ublk_stop_disk", 00:13:19.064 "ublk_start_disk", 00:13:19.064 "ublk_destroy_target", 00:13:19.064 "ublk_create_target", 00:13:19.064 "virtio_blk_create_transport", 00:13:19.064 "virtio_blk_get_transports", 00:13:19.064 "vhost_controller_set_coalescing", 00:13:19.064 "vhost_get_controllers", 00:13:19.064 "vhost_delete_controller", 00:13:19.064 "vhost_create_blk_controller", 00:13:19.064 "vhost_scsi_controller_remove_target", 00:13:19.064 "vhost_scsi_controller_add_target", 00:13:19.064 "vhost_start_scsi_controller", 00:13:19.064 "vhost_create_scsi_controller", 00:13:19.064 "thread_set_cpumask", 00:13:19.064 "framework_get_scheduler", 00:13:19.064 "framework_set_scheduler", 00:13:19.064 "framework_get_reactors", 00:13:19.064 "thread_get_io_channels", 00:13:19.064 "thread_get_pollers", 00:13:19.064 "thread_get_stats", 00:13:19.064 "framework_monitor_context_switch", 00:13:19.064 "spdk_kill_instance", 00:13:19.064 "log_enable_timestamps", 00:13:19.064 "log_get_flags", 00:13:19.064 "log_clear_flag", 00:13:19.064 "log_set_flag", 00:13:19.064 "log_get_level", 00:13:19.064 "log_set_level", 00:13:19.064 "log_get_print_level", 00:13:19.064 "log_set_print_level", 00:13:19.064 "framework_enable_cpumask_locks", 00:13:19.064 "framework_disable_cpumask_locks", 00:13:19.064 "framework_wait_init", 00:13:19.064 "framework_start_init", 00:13:19.064 "scsi_get_devices", 00:13:19.064 "bdev_get_histogram", 00:13:19.064 "bdev_enable_histogram", 00:13:19.064 "bdev_set_qos_limit", 00:13:19.064 "bdev_set_qd_sampling_period", 00:13:19.064 "bdev_get_bdevs", 00:13:19.064 "bdev_reset_iostat", 00:13:19.064 "bdev_get_iostat", 00:13:19.064 "bdev_examine", 00:13:19.064 "bdev_wait_for_examine", 00:13:19.064 "bdev_set_options", 00:13:19.064 "notify_get_notifications", 00:13:19.064 "notify_get_types", 00:13:19.064 "accel_get_stats", 00:13:19.064 "accel_set_options", 00:13:19.064 "accel_set_driver", 00:13:19.064 "accel_crypto_key_destroy", 00:13:19.064 "accel_crypto_keys_get", 00:13:19.064 "accel_crypto_key_create", 00:13:19.064 "accel_assign_opc", 00:13:19.064 "accel_get_module_info", 00:13:19.064 "accel_get_opc_assignments", 00:13:19.064 "vmd_rescan", 00:13:19.064 "vmd_remove_device", 00:13:19.064 "vmd_enable", 00:13:19.064 "sock_get_default_impl", 00:13:19.064 "sock_set_default_impl", 00:13:19.064 "sock_impl_set_options", 00:13:19.064 "sock_impl_get_options", 00:13:19.064 "iobuf_get_stats", 00:13:19.064 "iobuf_set_options", 00:13:19.064 "framework_get_pci_devices", 00:13:19.064 "framework_get_config", 00:13:19.064 "framework_get_subsystems", 00:13:19.064 "trace_get_info", 00:13:19.064 "trace_get_tpoint_group_mask", 00:13:19.064 "trace_disable_tpoint_group", 00:13:19.064 "trace_enable_tpoint_group", 00:13:19.064 "trace_clear_tpoint_mask", 00:13:19.064 "trace_set_tpoint_mask", 00:13:19.064 
"keyring_get_keys", 00:13:19.064 "spdk_get_version", 00:13:19.064 "rpc_get_methods" 00:13:19.064 ] 00:13:19.064 13:51:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:13:19.064 13:51:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:19.064 13:51:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.064 13:51:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:19.064 13:51:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59112 00:13:19.064 13:51:17 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 59112 ']' 00:13:19.064 13:51:17 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 59112 00:13:19.064 13:51:17 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:13:19.064 13:51:17 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:19.064 13:51:17 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59112 00:13:19.323 killing process with pid 59112 00:13:19.323 13:51:17 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:19.323 13:51:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:19.323 13:51:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59112' 00:13:19.323 13:51:17 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 59112 00:13:19.323 13:51:17 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 59112 00:13:19.581 ************************************ 00:13:19.581 END TEST spdkcli_tcp 00:13:19.581 ************************************ 00:13:19.581 00:13:19.581 real 0m1.707s 00:13:19.581 user 0m2.941s 00:13:19.581 sys 0m0.507s 00:13:19.581 13:51:18 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:19.581 13:51:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.581 13:51:18 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:19.581 13:51:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:19.581 13:51:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:19.581 13:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.581 ************************************ 00:13:19.581 START TEST dpdk_mem_utility 00:13:19.581 ************************************ 00:13:19.581 13:51:18 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:19.839 * Looking for test storage... 00:13:19.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:13:19.839 13:51:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:19.839 13:51:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:19.839 13:51:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59197 00:13:19.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:19.839 13:51:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59197 00:13:19.839 13:51:18 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 59197 ']' 00:13:19.839 13:51:18 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.839 13:51:18 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:19.839 13:51:18 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.839 13:51:18 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:19.839 13:51:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 [2024-05-15 13:51:18.243184] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:19.839 [2024-05-15 13:51:18.243264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59197 ] 00:13:19.839 [2024-05-15 13:51:18.377679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.117 [2024-05-15 13:51:18.481487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.682 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:20.682 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:13:20.682 13:51:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:13:20.682 13:51:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:13:20.682 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.682 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:20.682 { 00:13:20.682 "filename": "/tmp/spdk_mem_dump.txt" 00:13:20.682 } 00:13:20.682 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.682 13:51:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:20.682 DPDK memory size 814.000000 MiB in 1 heap(s) 00:13:20.682 1 heaps totaling size 814.000000 MiB 00:13:20.682 size: 814.000000 MiB heap id: 0 00:13:20.682 end heaps---------- 00:13:20.682 8 mempools totaling size 598.116089 MiB 00:13:20.682 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:13:20.682 size: 158.602051 MiB name: PDU_data_out_Pool 00:13:20.682 size: 84.521057 MiB name: bdev_io_59197 00:13:20.682 size: 51.011292 MiB name: evtpool_59197 00:13:20.682 size: 50.003479 MiB name: msgpool_59197 00:13:20.682 size: 21.763794 MiB name: PDU_Pool 00:13:20.682 size: 19.513306 MiB name: SCSI_TASK_Pool 00:13:20.682 size: 0.026123 MiB name: Session_Pool 00:13:20.682 end mempools------- 00:13:20.682 6 memzones totaling size 4.142822 MiB 00:13:20.682 size: 1.000366 MiB name: RG_ring_0_59197 00:13:20.682 size: 1.000366 MiB name: RG_ring_1_59197 00:13:20.682 size: 1.000366 MiB name: RG_ring_4_59197 00:13:20.682 size: 1.000366 MiB name: RG_ring_5_59197 00:13:20.682 size: 0.125366 MiB name: RG_ring_2_59197 00:13:20.682 size: 0.015991 MiB name: RG_ring_3_59197 00:13:20.682 end memzones------- 00:13:20.682 13:51:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # 
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:13:20.941 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:13:20.941 list of free elements. size: 12.472290 MiB 00:13:20.941 element at address: 0x200000400000 with size: 1.999512 MiB 00:13:20.941 element at address: 0x200018e00000 with size: 0.999878 MiB 00:13:20.941 element at address: 0x200019000000 with size: 0.999878 MiB 00:13:20.941 element at address: 0x200003e00000 with size: 0.996277 MiB 00:13:20.941 element at address: 0x200031c00000 with size: 0.994446 MiB 00:13:20.941 element at address: 0x200013800000 with size: 0.978699 MiB 00:13:20.941 element at address: 0x200007000000 with size: 0.959839 MiB 00:13:20.941 element at address: 0x200019200000 with size: 0.936584 MiB 00:13:20.941 element at address: 0x200000200000 with size: 0.833191 MiB 00:13:20.941 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:13:20.941 element at address: 0x20000b200000 with size: 0.489807 MiB 00:13:20.941 element at address: 0x200000800000 with size: 0.486145 MiB 00:13:20.941 element at address: 0x200019400000 with size: 0.485657 MiB 00:13:20.941 element at address: 0x200027e00000 with size: 0.395752 MiB 00:13:20.941 element at address: 0x200003a00000 with size: 0.347839 MiB 00:13:20.941 list of standard malloc elements. size: 199.265137 MiB 00:13:20.941 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:13:20.941 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:13:20.941 element at address: 0x200018efff80 with size: 1.000122 MiB 00:13:20.941 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:13:20.941 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:13:20.941 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:13:20.941 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:13:20.941 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:13:20.941 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:13:20.941 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:13:20.941 element 
at address: 0x2000002d6480 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087c740 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087c800 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087c980 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59180 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59240 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59300 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59480 
with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59540 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59600 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59780 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59840 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59900 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:13:20.941 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003adb300 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003adb500 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003affa80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003affb40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20000b27d940 with size: 0.000183 MiB 
00:13:20.942 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:13:20.942 element at 
address: 0x20001aa93940 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:13:20.942 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e65500 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ca80 
with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ef40 with size: 0.000183 MiB 
00:13:20.942 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:13:20.942 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:13:20.942 list of memzone associated elements. size: 602.262573 MiB 00:13:20.942 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:13:20.942 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:13:20.942 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:13:20.942 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:13:20.942 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:13:20.942 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59197_0 00:13:20.942 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:13:20.942 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59197_0 00:13:20.942 element at address: 0x200003fff380 with size: 48.003052 MiB 00:13:20.942 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59197_0 00:13:20.942 element at address: 0x2000195be940 with size: 20.255554 MiB 00:13:20.942 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:13:20.942 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:13:20.942 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:13:20.942 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:13:20.942 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59197 00:13:20.942 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:13:20.942 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59197 00:13:20.942 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:13:20.942 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59197 00:13:20.942 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:13:20.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:13:20.942 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:13:20.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:13:20.942 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:13:20.942 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:13:20.942 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:13:20.942 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:13:20.942 element at address: 0x200003eff180 with size: 1.000488 MiB 00:13:20.942 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59197 00:13:20.942 element at address: 0x200003affc00 with size: 1.000488 MiB 00:13:20.943 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59197 00:13:20.943 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:13:20.943 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59197 00:13:20.943 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:13:20.943 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59197 00:13:20.943 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:13:20.943 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59197 00:13:20.943 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:13:20.943 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:13:20.943 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:13:20.943 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:13:20.943 element at address: 0x20001947c540 with size: 0.250488 MiB 00:13:20.943 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:13:20.943 element at address: 0x200003adf880 with size: 0.125488 MiB 00:13:20.943 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59197 00:13:20.943 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:13:20.943 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:13:20.943 element at address: 0x200027e65680 with size: 0.023743 MiB 00:13:20.943 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:13:20.943 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:13:20.943 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59197 00:13:20.943 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:13:20.943 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:13:20.943 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:13:20.943 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59197 00:13:20.943 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:13:20.943 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59197 00:13:20.943 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:13:20.943 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:13:20.943 13:51:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:13:20.943 13:51:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59197 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 59197 ']' 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 59197 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59197 00:13:20.943 killing process with pid 59197 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:20.943 13:51:19 
dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59197' 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 59197 00:13:20.943 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 59197 00:13:21.201 00:13:21.201 real 0m1.584s 00:13:21.201 user 0m1.663s 00:13:21.201 sys 0m0.410s 00:13:21.201 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:21.201 13:51:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:21.201 ************************************ 00:13:21.201 END TEST dpdk_mem_utility 00:13:21.201 ************************************ 00:13:21.201 13:51:19 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:21.201 13:51:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:21.201 13:51:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.201 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:13:21.201 ************************************ 00:13:21.201 START TEST event 00:13:21.201 ************************************ 00:13:21.201 13:51:19 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:21.460 * Looking for test storage... 00:13:21.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:21.460 13:51:19 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:21.460 13:51:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:13:21.460 13:51:19 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:21.460 13:51:19 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:21.460 13:51:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.460 13:51:19 event -- common/autotest_common.sh@10 -- # set +x 00:13:21.460 ************************************ 00:13:21.460 START TEST event_perf 00:13:21.460 ************************************ 00:13:21.460 13:51:19 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:21.460 Running I/O for 1 seconds...[2024-05-15 13:51:19.899166] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:21.460 [2024-05-15 13:51:19.899959] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:13:21.719 [2024-05-15 13:51:20.046616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.719 [2024-05-15 13:51:20.150234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.719 [2024-05-15 13:51:20.150366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.719 [2024-05-15 13:51:20.150368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.719 Running I/O for 1 seconds...[2024-05-15 13:51:20.150315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.106 00:13:23.106 lcore 0: 186305 00:13:23.106 lcore 1: 186304 00:13:23.106 lcore 2: 186305 00:13:23.106 lcore 3: 186306 00:13:23.106 done. 
00:13:23.106 00:13:23.106 real 0m1.378s 00:13:23.106 user 0m4.187s 00:13:23.106 sys 0m0.063s 00:13:23.106 13:51:21 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.106 13:51:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 ************************************ 00:13:23.106 END TEST event_perf 00:13:23.106 ************************************ 00:13:23.106 13:51:21 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:23.106 13:51:21 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:23.106 13:51:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:23.106 13:51:21 event -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 ************************************ 00:13:23.106 START TEST event_reactor 00:13:23.106 ************************************ 00:13:23.106 13:51:21 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:23.106 [2024-05-15 13:51:21.342118] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:23.106 [2024-05-15 13:51:21.342212] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:13:23.106 [2024-05-15 13:51:21.485347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.106 [2024-05-15 13:51:21.588859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.480 test_start 00:13:24.480 oneshot 00:13:24.480 tick 100 00:13:24.480 tick 100 00:13:24.480 tick 250 00:13:24.480 tick 100 00:13:24.480 tick 100 00:13:24.480 tick 100 00:13:24.480 tick 250 00:13:24.480 tick 500 00:13:24.480 tick 100 00:13:24.480 tick 100 00:13:24.480 tick 250 00:13:24.480 tick 100 00:13:24.480 tick 100 00:13:24.480 test_end 00:13:24.480 00:13:24.480 real 0m1.373s 00:13:24.480 user 0m1.216s 00:13:24.480 sys 0m0.049s 00:13:24.480 ************************************ 00:13:24.480 END TEST event_reactor 00:13:24.480 ************************************ 00:13:24.480 13:51:22 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.480 13:51:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:13:24.480 13:51:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:24.480 13:51:22 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:24.480 13:51:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.480 13:51:22 event -- common/autotest_common.sh@10 -- # set +x 00:13:24.480 ************************************ 00:13:24.480 START TEST event_reactor_perf 00:13:24.480 ************************************ 00:13:24.480 13:51:22 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:24.480 [2024-05-15 13:51:22.780345] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:13:24.480 [2024-05-15 13:51:22.780474] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59343 ] 00:13:24.480 [2024-05-15 13:51:22.909657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.480 [2024-05-15 13:51:23.015201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.852 test_start 00:13:25.852 test_end 00:13:25.852 Performance: 472432 events per second 00:13:25.852 00:13:25.852 real 0m1.364s 00:13:25.852 user 0m1.198s 00:13:25.852 sys 0m0.059s 00:13:25.852 13:51:24 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:25.852 ************************************ 00:13:25.852 END TEST event_reactor_perf 00:13:25.852 ************************************ 00:13:25.852 13:51:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:13:25.852 13:51:24 event -- event/event.sh@49 -- # uname -s 00:13:25.852 13:51:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:13:25.852 13:51:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:25.852 13:51:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:25.852 13:51:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.852 13:51:24 event -- common/autotest_common.sh@10 -- # set +x 00:13:25.852 ************************************ 00:13:25.852 START TEST event_scheduler 00:13:25.852 ************************************ 00:13:25.852 13:51:24 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:25.852 * Looking for test storage... 00:13:25.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:13:25.852 13:51:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:13:25.852 13:51:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59404 00:13:25.852 13:51:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:13:25.852 13:51:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:13:25.852 13:51:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59404 00:13:25.852 13:51:24 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 59404 ']' 00:13:25.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.852 13:51:24 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.852 13:51:24 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:25.852 13:51:24 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.853 13:51:24 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:25.853 13:51:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:25.853 [2024-05-15 13:51:24.373049] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:13:25.853 [2024-05-15 13:51:24.373148] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59404 ] 00:13:26.112 [2024-05-15 13:51:24.512305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.112 [2024-05-15 13:51:24.622486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.112 [2024-05-15 13:51:24.622682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.112 [2024-05-15 13:51:24.622797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.112 [2024-05-15 13:51:24.622798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:13:27.048 13:51:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 POWER: Env isn't set yet! 00:13:27.048 POWER: Attempting to initialise ACPI cpufreq power management... 00:13:27.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:27.048 POWER: Cannot set governor of lcore 0 to userspace 00:13:27.048 POWER: Attempting to initialise PSTAT power management... 00:13:27.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:27.048 POWER: Cannot set governor of lcore 0 to performance 00:13:27.048 POWER: Attempting to initialise AMD PSTATE power management... 00:13:27.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:27.048 POWER: Cannot set governor of lcore 0 to userspace 00:13:27.048 POWER: Attempting to initialise CPPC power management... 00:13:27.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:27.048 POWER: Cannot set governor of lcore 0 to userspace 00:13:27.048 POWER: Attempting to initialise VM power management... 00:13:27.048 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:13:27.048 POWER: Unable to set Power Management Environment for lcore 0 00:13:27.048 [2024-05-15 13:51:25.320188] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:13:27.048 [2024-05-15 13:51:25.320203] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:13:27.048 [2024-05-15 13:51:25.320212] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 [2024-05-15 13:51:25.402302] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
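The POWER errors above are the dynamic scheduler probing cpufreq governors (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC) that this VM does not expose, so the dpdk governor stays disabled and the test carries on without it. The same sysfs files can be inspected directly from the shell; an illustrative check using the standard Linux cpufreq paths (not part of the test scripts):

  # If these files are missing or unreadable, the governor init above is expected to fail.
  for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \
           /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors; do
      if [[ -r "$f" ]]; then
          echo "$f: $(cat "$f")"
      else
          echo "$f: not available"
      fi
  done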
00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 ************************************ 00:13:27.048 START TEST scheduler_create_thread 00:13:27.048 ************************************ 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 2 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 3 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 4 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 5 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 6 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 7 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 8 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 9 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.048 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.048 10 00:13:27.049 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.049 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:13:27.049 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.049 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:27.617 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.617 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:13:27.617 13:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:13:27.617 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.617 13:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.554 13:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.554 13:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:13:28.554 13:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.554 13:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:29.491 13:51:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.491 13:51:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:13:29.491 13:51:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:13:29.491 13:51:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.491 13:51:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:30.429 ************************************ 00:13:30.429 END TEST scheduler_create_thread 00:13:30.429 ************************************ 00:13:30.429 13:51:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.429 00:13:30.429 real 0m3.222s 00:13:30.429 user 0m0.029s 00:13:30.429 sys 0m0.005s 00:13:30.429 13:51:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.429 13:51:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:30.429 13:51:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:30.429 13:51:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59404 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 59404 ']' 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 59404 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59404 00:13:30.429 killing process with pid 59404 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59404' 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 59404 00:13:30.429 13:51:28 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 59404 00:13:30.694 [2024-05-15 13:51:29.021515] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
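The scheduler_create_thread test above is driven entirely through rpc_cmd with the test's scheduler_plugin: threads 2 through 10 are created pinned or unpinned with various activity levels, thread 11 is switched to 50% active, and thread 12 is created and deleted again. Condensed into direct rpc.py calls, a sketch of that create/activate/delete cycle (assuming the scheduler test app is still listening on /var/tmp/spdk.sock and that scheduler_plugin is importable, as it is when run from test/event/scheduler):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
  tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)   # prints the new thread id
  $RPC scheduler_thread_set_active "$tid" 50                           # drop it to 50% active
  $RPC scheduler_thread_delete "$tid"                                  # remove it again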
00:13:30.966 ************************************ 00:13:30.966 END TEST event_scheduler 00:13:30.966 ************************************ 00:13:30.966 00:13:30.966 real 0m5.116s 00:13:30.966 user 0m10.273s 00:13:30.966 sys 0m0.407s 00:13:30.966 13:51:29 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.966 13:51:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:30.966 13:51:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:13:30.966 13:51:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:13:30.966 13:51:29 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:30.966 13:51:29 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.966 13:51:29 event -- common/autotest_common.sh@10 -- # set +x 00:13:30.966 ************************************ 00:13:30.966 START TEST app_repeat 00:13:30.966 ************************************ 00:13:30.966 13:51:29 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:13:30.966 13:51:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.966 13:51:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:30.966 13:51:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:13:30.966 13:51:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:30.966 13:51:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59504 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:13:30.967 Process app_repeat pid: 59504 00:13:30.967 spdk_app_start Round 0 00:13:30.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59504' 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:13:30.967 13:51:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59504 /var/tmp/spdk-nbd.sock 00:13:30.967 13:51:29 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 59504 ']' 00:13:30.967 13:51:29 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:30.967 13:51:29 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:30.967 13:51:29 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:30.967 13:51:29 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:30.967 13:51:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:30.967 [2024-05-15 13:51:29.425558] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:13:30.967 [2024-05-15 13:51:29.425821] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59504 ] 00:13:31.225 [2024-05-15 13:51:29.566999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.225 [2024-05-15 13:51:29.663006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.225 [2024-05-15 13:51:29.663008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.793 13:51:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:31.793 13:51:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:13:31.793 13:51:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:32.052 Malloc0 00:13:32.052 13:51:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:32.311 Malloc1 00:13:32.311 13:51:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:32.311 /dev/nbd0 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:32.311 13:51:30 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:32.311 1+0 records in 00:13:32.311 1+0 records out 00:13:32.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575096 s, 7.1 MB/s 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:32.311 13:51:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:32.311 13:51:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:32.570 /dev/nbd1 00:13:32.570 13:51:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:32.570 13:51:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:32.570 1+0 records in 00:13:32.570 1+0 records out 00:13:32.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057589 s, 7.1 MB/s 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:32.570 13:51:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:13:32.570 13:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.570 13:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:32.570 13:51:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:32.570 13:51:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.570 
13:51:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:32.829 13:51:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:32.829 { 00:13:32.829 "nbd_device": "/dev/nbd0", 00:13:32.829 "bdev_name": "Malloc0" 00:13:32.829 }, 00:13:32.829 { 00:13:32.829 "nbd_device": "/dev/nbd1", 00:13:32.829 "bdev_name": "Malloc1" 00:13:32.829 } 00:13:32.829 ]' 00:13:32.829 13:51:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:32.829 { 00:13:32.829 "nbd_device": "/dev/nbd0", 00:13:32.829 "bdev_name": "Malloc0" 00:13:32.829 }, 00:13:32.830 { 00:13:32.830 "nbd_device": "/dev/nbd1", 00:13:32.830 "bdev_name": "Malloc1" 00:13:32.830 } 00:13:32.830 ]' 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:32.830 /dev/nbd1' 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:32.830 /dev/nbd1' 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:32.830 256+0 records in 00:13:32.830 256+0 records out 00:13:32.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126636 s, 82.8 MB/s 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:32.830 13:51:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:33.089 256+0 records in 00:13:33.089 256+0 records out 00:13:33.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251341 s, 41.7 MB/s 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:33.089 256+0 records in 00:13:33.089 256+0 records out 00:13:33.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026034 s, 40.3 MB/s 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:33.089 13:51:31 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:33.089 13:51:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:33.349 13:51:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.349 13:51:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:33.608 13:51:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:33.608 13:51:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:33.867 13:51:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:34.127 [2024-05-15 13:51:32.515777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.127 [2024-05-15 13:51:32.606610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.127 [2024-05-15 13:51:32.606612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.127 [2024-05-15 13:51:32.654645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:34.127 [2024-05-15 13:51:32.654703] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:37.415 spdk_app_start Round 1 00:13:37.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:37.415 13:51:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:37.415 13:51:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:13:37.415 13:51:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59504 /var/tmp/spdk-nbd.sock 00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 59504 ']' 00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
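waitforlisten, traced just above with max_retries=100, simply blocks until the freshly started app_repeat process has its RPC socket up before the test sends it any nbd commands. A loose sketch of the idea (the real helper in autotest_common.sh does more, e.g. checking that the pid is still alive and retrying an actual RPC rather than only testing for the socket file):

  sock=/var/tmp/spdk-nbd.sock
  for _ in $(seq 1 100); do          # mirrors max_retries=100 from the trace above
      [[ -S "$sock" ]] && break      # stop as soon as the UNIX socket exists
      sleep 0.1
  done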
00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:37.415 13:51:35 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:13:37.415 13:51:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:37.415 Malloc0 00:13:37.415 13:51:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:37.415 Malloc1 00:13:37.415 13:51:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:37.415 13:51:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:37.675 /dev/nbd0 00:13:37.675 13:51:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.675 13:51:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:37.675 1+0 records in 00:13:37.675 1+0 records out 
00:13:37.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187135 s, 21.9 MB/s 00:13:37.675 13:51:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:37.676 13:51:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:13:37.676 13:51:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:37.676 13:51:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:37.676 13:51:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:13:37.676 13:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.676 13:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:37.676 13:51:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:37.935 /dev/nbd1 00:13:37.935 13:51:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:37.935 13:51:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:37.935 1+0 records in 00:13:37.935 1+0 records out 00:13:37.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190954 s, 21.5 MB/s 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:37.935 13:51:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:13:37.935 13:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.935 13:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:37.935 13:51:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:37.935 13:51:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.935 13:51:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:38.195 { 00:13:38.195 "nbd_device": "/dev/nbd0", 00:13:38.195 "bdev_name": "Malloc0" 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "nbd_device": "/dev/nbd1", 00:13:38.195 "bdev_name": "Malloc1" 00:13:38.195 } 
00:13:38.195 ]' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:38.195 { 00:13:38.195 "nbd_device": "/dev/nbd0", 00:13:38.195 "bdev_name": "Malloc0" 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "nbd_device": "/dev/nbd1", 00:13:38.195 "bdev_name": "Malloc1" 00:13:38.195 } 00:13:38.195 ]' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:38.195 /dev/nbd1' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:38.195 /dev/nbd1' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:38.195 256+0 records in 00:13:38.195 256+0 records out 00:13:38.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652143 s, 161 MB/s 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:38.195 256+0 records in 00:13:38.195 256+0 records out 00:13:38.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200232 s, 52.4 MB/s 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:38.195 256+0 records in 00:13:38.195 256+0 records out 00:13:38.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02082 s, 50.4 MB/s 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.195 13:51:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.455 13:51:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.714 13:51:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:38.973 13:51:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:38.973 13:51:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:39.231 13:51:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:39.491 [2024-05-15 13:51:37.868931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.491 [2024-05-15 13:51:37.955501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.491 [2024-05-15 13:51:37.955502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.491 [2024-05-15 13:51:37.998870] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:39.491 [2024-05-15 13:51:37.998909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:42.780 spdk_app_start Round 2 00:13:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:42.780 13:51:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:42.780 13:51:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:13:42.780 13:51:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59504 /var/tmp/spdk-nbd.sock 00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 59504 ']' 00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:42.780 13:51:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:13:42.780 13:51:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:42.780 Malloc0 00:13:42.780 13:51:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:42.780 Malloc1 00:13:43.038 13:51:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:43.038 13:51:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.038 13:51:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:43.038 13:51:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:43.038 13:51:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.038 13:51:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:43.038 13:51:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:43.039 /dev/nbd0 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:43.039 1+0 records in 00:13:43.039 1+0 records out 
00:13:43.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309598 s, 13.2 MB/s 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:43.039 13:51:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.039 13:51:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:43.298 /dev/nbd1 00:13:43.298 13:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.298 13:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:43.298 1+0 records in 00:13:43.298 1+0 records out 00:13:43.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307668 s, 13.3 MB/s 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:43.298 13:51:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:13:43.298 13:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.298 13:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.298 13:51:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:43.298 13:51:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.298 13:51:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:43.558 { 00:13:43.558 "nbd_device": "/dev/nbd0", 00:13:43.558 "bdev_name": "Malloc0" 00:13:43.558 }, 00:13:43.558 { 00:13:43.558 "nbd_device": "/dev/nbd1", 00:13:43.558 "bdev_name": "Malloc1" 00:13:43.558 } 
00:13:43.558 ]' 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:43.558 { 00:13:43.558 "nbd_device": "/dev/nbd0", 00:13:43.558 "bdev_name": "Malloc0" 00:13:43.558 }, 00:13:43.558 { 00:13:43.558 "nbd_device": "/dev/nbd1", 00:13:43.558 "bdev_name": "Malloc1" 00:13:43.558 } 00:13:43.558 ]' 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:43.558 /dev/nbd1' 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:43.558 /dev/nbd1' 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:43.558 256+0 records in 00:13:43.558 256+0 records out 00:13:43.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01163 s, 90.2 MB/s 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:43.558 256+0 records in 00:13:43.558 256+0 records out 00:13:43.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024624 s, 42.6 MB/s 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:43.558 13:51:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:43.852 256+0 records in 00:13:43.852 256+0 records out 00:13:43.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211581 s, 49.6 MB/s 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.852 13:51:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.111 13:51:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:44.470 13:51:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:44.470 13:51:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:44.728 13:51:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:44.728 [2024-05-15 13:51:43.267447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.987 [2024-05-15 13:51:43.357905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.987 [2024-05-15 13:51:43.357906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.987 [2024-05-15 13:51:43.400095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:44.987 [2024-05-15 13:51:43.400163] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:48.272 13:51:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59504 /var/tmp/spdk-nbd.sock 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 59504 ']' 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:48.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
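Each app_repeat round traced above runs the same malloc-over-NBD data check before killing the app. A condensed, hand-written sketch of what one round's xtrace amounts to follows; the bdev names, sockets and flags are the ones in this log, the temp-file path is shortened, $repeat_pid stands in for the app_repeat pid (59504 here), and waitforlisten/killprocess are the autotest_common.sh helpers already traced in this output.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock     # app_repeat re-listens at the start of each round
$rpc bdev_malloc_create 64 4096                        # -> Malloc0
$rpc bdev_malloc_create 64 4096                        # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0                  # export both bdevs as NBD devices
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB test pattern
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern through NBD
  cmp -b -n 1M /tmp/nbdrandtest "$nbd"                              # read it back and compare
done
rm /tmp/nbdrandtest
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM                        # ends the round; the driver sleeps 3s and loops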
00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:13:48.272 13:51:46 event.app_repeat -- event/event.sh@39 -- # killprocess 59504 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 59504 ']' 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 59504 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59504 00:13:48.272 killing process with pid 59504 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59504' 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@965 -- # kill 59504 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@970 -- # wait 59504 00:13:48.272 spdk_app_start is called in Round 0. 00:13:48.272 Shutdown signal received, stop current app iteration 00:13:48.272 Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 reinitialization... 00:13:48.272 spdk_app_start is called in Round 1. 00:13:48.272 Shutdown signal received, stop current app iteration 00:13:48.272 Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 reinitialization... 00:13:48.272 spdk_app_start is called in Round 2. 00:13:48.272 Shutdown signal received, stop current app iteration 00:13:48.272 Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 reinitialization... 00:13:48.272 spdk_app_start is called in Round 3. 00:13:48.272 Shutdown signal received, stop current app iteration 00:13:48.272 13:51:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:13:48.272 13:51:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:13:48.272 00:13:48.272 real 0m17.145s 00:13:48.272 user 0m37.068s 00:13:48.272 sys 0m2.887s 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.272 ************************************ 00:13:48.272 13:51:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:48.272 END TEST app_repeat 00:13:48.272 ************************************ 00:13:48.272 13:51:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:13:48.272 13:51:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:48.272 13:51:46 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:48.272 13:51:46 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.272 13:51:46 event -- common/autotest_common.sh@10 -- # set +x 00:13:48.272 ************************************ 00:13:48.272 START TEST cpu_locks 00:13:48.272 ************************************ 00:13:48.272 13:51:46 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:48.273 * Looking for test storage... 
00:13:48.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:48.273 13:51:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:13:48.273 13:51:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:13:48.273 13:51:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:13:48.273 13:51:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:13:48.273 13:51:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:48.273 13:51:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.273 13:51:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:48.273 ************************************ 00:13:48.273 START TEST default_locks 00:13:48.273 ************************************ 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59920 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59920 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 59920 ']' 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.273 13:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:48.273 [2024-05-15 13:51:46.789155] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:13:48.273 [2024-05-15 13:51:46.789388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 00:13:48.532 [2024-05-15 13:51:46.928025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.532 [2024-05-15 13:51:47.027754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.097 13:51:47 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.097 13:51:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:13:49.097 13:51:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59920 00:13:49.097 13:51:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59920 00:13:49.097 13:51:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59920 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 59920 ']' 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 59920 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59920 00:13:49.666 killing process with pid 59920 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59920' 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 59920 00:13:49.666 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 59920 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59920 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 59920 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 59920 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 59920 ']' 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.988 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.988 ERROR: process (pid: 59920) is no longer running 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:49.988 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (59920) - No such process 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:13:49.988 13:51:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:50.252 13:51:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:13:50.252 13:51:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:50.252 00:13:50.252 real 0m1.811s 00:13:50.252 user 0m1.891s 00:13:50.252 sys 0m0.552s 00:13:50.252 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:50.252 ************************************ 00:13:50.252 END TEST default_locks 00:13:50.252 ************************************ 00:13:50.252 13:51:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:50.252 13:51:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:13:50.252 13:51:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:50.252 13:51:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:50.252 13:51:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:50.252 ************************************ 00:13:50.252 START TEST default_locks_via_rpc 00:13:50.252 ************************************ 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59972 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59972 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 59972 ']' 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
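The default_locks case that finished just above boils down to a handful of checks. A minimal sketch, using the same spdk_tgt binary and the waitforlisten/killprocess/no_locks helpers traced in this log ($pid is illustrative; it was 59920 in the run above):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
waitforlisten "$pid"
lslocks -p "$pid" | grep -q spdk_cpu_lock        # the target must hold the per-core lock file
killprocess "$pid"
! waitforlisten "$pid"                           # once killed, waiting on it must fail ("No such process")
no_locks                                         # and no leftover spdk_cpu_lock files may remain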
00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.252 13:51:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.252 [2024-05-15 13:51:48.668283] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:50.252 [2024-05-15 13:51:48.668363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59972 ] 00:13:50.252 [2024-05-15 13:51:48.794418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.510 [2024-05-15 13:51:48.894387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59972 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59972 00:13:51.075 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59972 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 59972 ']' 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 59972 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59972 00:13:51.642 killing process with pid 59972 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59972' 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 59972 00:13:51.642 13:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 59972 00:13:51.902 00:13:51.902 real 0m1.717s 00:13:51.902 user 0m1.816s 00:13:51.902 sys 0m0.503s 00:13:51.902 ************************************ 00:13:51.902 END TEST default_locks_via_rpc 00:13:51.902 ************************************ 00:13:51.902 13:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.902 13:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.902 13:51:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:51.902 13:51:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:51.902 13:51:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.902 13:51:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:51.902 ************************************ 00:13:51.902 START TEST non_locking_app_on_locked_coremask 00:13:51.902 ************************************ 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60018 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60018 /var/tmp/spdk.sock 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60018 ']' 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.902 13:51:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:51.902 [2024-05-15 13:51:50.446497] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
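default_locks_via_rpc, which wrapped up just above, checks the same lock but toggles it at runtime over RPC instead of at start-up. Roughly, with $pid the running spdk_tgt:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # talks to /var/tmp/spdk.sock by default
$rpc framework_disable_cpumask_locks                     # release the core-mask lock files
no_locks                                                 # no spdk_cpu_lock file is held now
$rpc framework_enable_cpumask_locks                      # re-claim them
lslocks -p "$pid" | grep -q spdk_cpu_lock                # the lock for core 0 is back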
00:13:51.902 [2024-05-15 13:51:50.446573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60018 ] 00:13:52.161 [2024-05-15 13:51:50.589138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.161 [2024-05-15 13:51:50.687488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60034 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60034 /var/tmp/spdk2.sock 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60034 ']' 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.728 13:51:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:52.986 [2024-05-15 13:51:51.335884] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:52.986 [2024-05-15 13:51:51.335976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60034 ] 00:13:52.986 [2024-05-15 13:51:51.484225] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:52.986 [2024-05-15 13:51:51.484268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.243 [2024-05-15 13:51:51.679486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.809 13:51:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:53.809 13:51:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:13:53.809 13:51:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60018 00:13:53.809 13:51:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60018 00:13:53.809 13:51:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60018 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60018 ']' 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 60018 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60018 00:13:54.744 killing process with pid 60018 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60018' 00:13:54.744 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 60018 00:13:54.745 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 60018 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60034 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60034 ']' 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 60034 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60034 00:13:55.314 killing process with pid 60034 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60034' 00:13:55.314 13:51:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 60034 00:13:55.314 13:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 60034 00:13:55.573 00:13:55.573 real 0m3.721s 00:13:55.573 user 0m4.033s 00:13:55.573 sys 0m1.013s 00:13:55.573 13:51:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:55.573 13:51:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:55.573 ************************************ 00:13:55.573 END TEST non_locking_app_on_locked_coremask 00:13:55.573 ************************************ 00:13:55.832 13:51:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:55.832 13:51:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:55.832 13:51:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:55.832 13:51:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:55.832 ************************************ 00:13:55.832 START TEST locking_app_on_unlocked_coremask 00:13:55.832 ************************************ 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60095 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60095 /var/tmp/spdk.sock 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60095 ']' 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:55.832 13:51:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:55.832 [2024-05-15 13:51:54.231570] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:13:55.832 [2024-05-15 13:51:54.231645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60095 ] 00:13:55.832 [2024-05-15 13:51:54.365105] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
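The non_locking_app_on_locked_coremask test that ended above is the coexistence case: the core-0 lock is already held, but a second target started with --disable-cpumask-locks never tries to take it, so both come up. In outline (binary path and sockets as in this log; the pid variables are illustrative):
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$spdk_tgt" -m 0x1 & locked_pid=$!                                     # claims spdk_cpu_lock for core 0
waitforlisten "$locked_pid"
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & unlocked_pid=$!
waitforlisten "$unlocked_pid" /var/tmp/spdk2.sock                      # logs "CPU core locks deactivated." and starts fine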
00:13:55.832 [2024-05-15 13:51:54.365153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.091 [2024-05-15 13:51:54.450080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60111 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60111 /var/tmp/spdk2.sock 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60111 ']' 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:56.659 13:51:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:56.659 [2024-05-15 13:51:55.102614] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:13:56.659 [2024-05-15 13:51:55.102866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:13:56.924 [2024-05-15 13:51:55.238378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.924 [2024-05-15 13:51:55.434176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.489 13:51:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:57.489 13:51:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:13:57.489 13:51:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60111 00:13:57.489 13:51:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60111 00:13:57.489 13:51:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60095 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60095 ']' 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 60095 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60095 00:13:58.864 killing process with pid 60095 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60095' 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 60095 00:13:58.864 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 60095 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60111 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60111 ']' 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 60111 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60111 00:13:59.433 killing process with pid 60111 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60111' 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 60111 00:13:59.433 13:51:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 60111 00:13:59.692 00:13:59.692 real 0m3.953s 00:13:59.692 user 0m4.358s 00:13:59.692 sys 0m1.051s 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.692 ************************************ 00:13:59.692 END TEST locking_app_on_unlocked_coremask 00:13:59.692 ************************************ 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:59.692 13:51:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:59.692 13:51:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:59.692 13:51:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:59.692 13:51:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:59.692 ************************************ 00:13:59.692 START TEST locking_app_on_locked_coremask 00:13:59.692 ************************************ 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60178 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60178 /var/tmp/spdk.sock 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60178 ']' 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.692 13:51:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:59.951 [2024-05-15 13:51:58.258937] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
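locking_app_on_unlocked_coremask, finished above, is the mirror image of the previous sketch: the first target opts out with --disable-cpumask-locks, so a later, normally-started target on the same core can still claim the lock, and both keep running. Only the launch order changes:
"$spdk_tgt" -m 0x1 --disable-cpumask-locks & first_pid=$!          # no lock taken
waitforlisten "$first_pid"
"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & second_pid=$!          # this one does claim core 0's lock
waitforlisten "$second_pid" /var/tmp/spdk2.sock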
00:13:59.951 [2024-05-15 13:51:58.259009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 00:13:59.951 [2024-05-15 13:51:58.396772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.951 [2024-05-15 13:51:58.493433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60189 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60189 /var/tmp/spdk2.sock 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60189 /var/tmp/spdk2.sock 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60189 /var/tmp/spdk2.sock 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60189 ']' 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:00.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:00.911 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:00.911 [2024-05-15 13:51:59.143490] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:00.911 [2024-05-15 13:51:59.143698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60189 ] 00:14:00.911 [2024-05-15 13:51:59.278336] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60178 has claimed it. 00:14:00.911 [2024-05-15 13:51:59.278389] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:01.479 ERROR: process (pid: 60189) is no longer running 00:14:01.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (60189) - No such process 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60178 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60178 00:14:01.479 13:51:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60178 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60178 ']' 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 60178 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60178 00:14:01.739 killing process with pid 60178 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60178' 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 60178 00:14:01.739 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 60178 00:14:02.308 00:14:02.308 real 0m2.419s 00:14:02.308 user 0m2.630s 00:14:02.308 sys 0m0.626s 00:14:02.308 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.308 13:52:00 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.308 ************************************ 00:14:02.308 END TEST locking_app_on_locked_coremask 00:14:02.308 ************************************ 00:14:02.308 13:52:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:14:02.308 13:52:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:02.308 13:52:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.308 13:52:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:02.308 ************************************ 00:14:02.308 START TEST locking_overlapped_coremask 00:14:02.308 ************************************ 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60240 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60240 /var/tmp/spdk.sock 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 60240 ']' 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.308 13:52:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:02.308 [2024-05-15 13:52:00.753706] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
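Aside: the locks_exist check in the locking_app_on_locked_coremask run above only verifies that the primary target (pid 60178) still holds its per-core lock files; the helper body itself is not printed in this trace, but based on the lslocks | grep invocation it records, a minimal stand-alone check along the same lines could look like this (Bash, illustrative only):

    pid=60178                                  # primary spdk_tgt from the test above
    # list the locks held by that process and look for the per-core lock files
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core locks still held by $pid"
    fi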
00:14:02.308 [2024-05-15 13:52:00.753808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60240 ] 00:14:02.567 [2024-05-15 13:52:00.895431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.567 [2024-05-15 13:52:00.997233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.567 [2024-05-15 13:52:00.997416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.567 [2024-05-15 13:52:00.997417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60258 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60258 /var/tmp/spdk2.sock 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60258 /var/tmp/spdk2.sock 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60258 /var/tmp/spdk2.sock 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 60258 ']' 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:03.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:03.134 13:52:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:03.134 [2024-05-15 13:52:01.638811] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:03.134 [2024-05-15 13:52:01.638880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60258 ] 00:14:03.394 [2024-05-15 13:52:01.774922] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60240 has claimed it. 00:14:03.394 [2024-05-15 13:52:01.774995] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:03.970 ERROR: process (pid: 60258) is no longer running 00:14:03.970 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (60258) - No such process 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60240 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 60240 ']' 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 60240 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60240 00:14:03.970 killing process with pid 60240 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60240' 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 60240 00:14:03.970 13:52:02 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 60240 00:14:04.228 00:14:04.228 real 0m1.988s 00:14:04.228 user 0m5.285s 00:14:04.229 sys 0m0.410s 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:04.229 ************************************ 00:14:04.229 END TEST locking_overlapped_coremask 00:14:04.229 ************************************ 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:04.229 13:52:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:14:04.229 13:52:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:04.229 13:52:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:04.229 13:52:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:04.229 ************************************ 00:14:04.229 START TEST locking_overlapped_coremask_via_rpc 00:14:04.229 ************************************ 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60298 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60298 /var/tmp/spdk.sock 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60298 ']' 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:04.229 13:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.487 [2024-05-15 13:52:02.810536] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:04.488 [2024-05-15 13:52:02.810612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60298 ] 00:14:04.488 [2024-05-15 13:52:02.952954] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
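Aside: the check_remaining_locks step in the overlapped-coremask test that just finished relies on the convention visible in the trace: a target started with -m 0x7 creates /var/tmp/spdk_cpu_lock_000 through _002, one file per claimed core. A stand-alone restatement of that comparison, mirroring the globbing shown in cpu_locks.sh, would be:

    # expected lock files for core mask 0x7 (cores 0, 1 and 2)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    actual=(/var/tmp/spdk_cpu_lock_*)
    [[ "${actual[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 are locked"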
00:14:04.488 [2024-05-15 13:52:02.953029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:04.747 [2024-05-15 13:52:03.050135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.747 [2024-05-15 13:52:03.050334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.747 [2024-05-15 13:52:03.050336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60316 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60316 /var/tmp/spdk2.sock 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60316 ']' 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:05.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:05.320 13:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.320 [2024-05-15 13:52:03.697234] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:05.320 [2024-05-15 13:52:03.697683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60316 ] 00:14:05.320 [2024-05-15 13:52:03.833761] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
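Aside: both instances in this via-RPC test are launched with --disable-cpumask-locks, hence the two "CPU core locks deactivated." notices, and that is why the second target comes up just below on cores 2-4 even though it overlaps the first one on core 2. Outside the harness the same overlap can be reproduced with the commands recorded in this run (run in the background here purely for illustration):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # with the locks disabled neither instance claims /var/tmp/spdk_cpu_lock_* at startup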
00:14:05.320 [2024-05-15 13:52:03.833949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.579 [2024-05-15 13:52:04.035905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.579 [2024-05-15 13:52:04.039829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.579 [2024-05-15 13:52:04.039833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.146 [2024-05-15 13:52:04.574865] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60298 has claimed it. 
00:14:06.146 request: 00:14:06.146 { 00:14:06.146 "method": "framework_enable_cpumask_locks", 00:14:06.146 "req_id": 1 00:14:06.146 } 00:14:06.146 Got JSON-RPC error response 00:14:06.146 response: 00:14:06.146 { 00:14:06.146 "code": -32603, 00:14:06.146 "message": "Failed to claim CPU core: 2" 00:14:06.146 } 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60298 /var/tmp/spdk.sock 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60298 ']' 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:06.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:06.146 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60316 /var/tmp/spdk2.sock 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60316 ']' 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:06.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
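Aside: rpc_cmd in the harness forwards the method name to the target's JSON-RPC server, and the error object above (code -32603, "Failed to claim CPU core: 2") is the response to framework_enable_cpumask_locks on the secondary socket while the primary still owns core 2. Assuming the usual scripts/rpc.py wrapper exposes the method under the same name, as rpc_cmd suggests, the same call can be issued by hand:

    # ask the secondary target (spdk2.sock) to re-enable core locks;
    # this is the call that fails while pid 60298 still holds core 2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks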
00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:06.405 13:52:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:06.664 00:14:06.664 real 0m2.254s 00:14:06.664 user 0m0.996s 00:14:06.664 sys 0m0.190s 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:06.664 13:52:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.664 ************************************ 00:14:06.664 END TEST locking_overlapped_coremask_via_rpc 00:14:06.664 ************************************ 00:14:06.664 13:52:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:14:06.664 13:52:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60298 ]] 00:14:06.664 13:52:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60298 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60298 ']' 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60298 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60298 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:06.664 killing process with pid 60298 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60298' 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 60298 00:14:06.664 13:52:05 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 60298 00:14:06.923 13:52:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60316 ]] 00:14:06.923 13:52:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60316 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60316 ']' 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60316 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:06.923 
13:52:05 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60316 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:06.923 killing process with pid 60316 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60316' 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 60316 00:14:06.923 13:52:05 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 60316 00:14:07.491 13:52:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:14:07.491 13:52:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:14:07.491 13:52:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60298 ]] 00:14:07.491 13:52:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60298 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60298 ']' 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60298 00:14:07.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (60298) - No such process 00:14:07.491 Process with pid 60298 is not found 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 60298 is not found' 00:14:07.491 13:52:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60316 ]] 00:14:07.491 13:52:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60316 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60316 ']' 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60316 00:14:07.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (60316) - No such process 00:14:07.491 Process with pid 60316 is not found 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 60316 is not found' 00:14:07.491 13:52:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:14:07.491 00:14:07.491 real 0m19.259s 00:14:07.491 user 0m31.828s 00:14:07.491 sys 0m5.229s 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.491 13:52:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:07.491 ************************************ 00:14:07.491 END TEST cpu_locks 00:14:07.491 ************************************ 00:14:07.491 00:14:07.491 real 0m46.179s 00:14:07.491 user 1m25.957s 00:14:07.491 sys 0m9.055s 00:14:07.491 13:52:05 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.491 13:52:05 event -- common/autotest_common.sh@10 -- # set +x 00:14:07.491 ************************************ 00:14:07.491 END TEST event 00:14:07.491 ************************************ 00:14:07.491 13:52:05 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:14:07.492 13:52:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:07.492 13:52:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.492 13:52:05 -- common/autotest_common.sh@10 -- # set +x 00:14:07.492 ************************************ 00:14:07.492 START TEST thread 00:14:07.492 ************************************ 00:14:07.492 13:52:05 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:14:07.751 * Looking for test storage... 
00:14:07.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:14:07.751 13:52:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:14:07.751 13:52:06 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:14:07.751 13:52:06 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.751 13:52:06 thread -- common/autotest_common.sh@10 -- # set +x 00:14:07.751 ************************************ 00:14:07.751 START TEST thread_poller_perf 00:14:07.751 ************************************ 00:14:07.751 13:52:06 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:14:07.751 [2024-05-15 13:52:06.133466] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:07.751 [2024-05-15 13:52:06.133579] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:14:07.751 [2024-05-15 13:52:06.278444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.009 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:14:08.009 [2024-05-15 13:52:06.371279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.944 ====================================== 00:14:08.944 busy:2495893786 (cyc) 00:14:08.944 total_run_count: 398000 00:14:08.944 tsc_hz: 2490000000 (cyc) 00:14:08.944 ====================================== 00:14:08.944 poller_cost: 6271 (cyc), 2518 (nsec) 00:14:08.944 00:14:08.944 real 0m1.367s 00:14:08.944 user 0m1.207s 00:14:08.944 sys 0m0.054s 00:14:08.944 13:52:07 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.944 13:52:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:14:08.944 ************************************ 00:14:08.944 END TEST thread_poller_perf 00:14:08.944 ************************************ 00:14:09.203 13:52:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:09.203 13:52:07 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:14:09.203 13:52:07 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.203 13:52:07 thread -- common/autotest_common.sh@10 -- # set +x 00:14:09.203 ************************************ 00:14:09.203 START TEST thread_poller_perf 00:14:09.203 ************************************ 00:14:09.203 13:52:07 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:09.203 [2024-05-15 13:52:07.564885] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:09.203 [2024-05-15 13:52:07.564984] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60463 ] 00:14:09.203 [2024-05-15 13:52:07.710235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.461 Running 1000 pollers for 1 seconds with 0 microseconds period. 
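Aside: the poller_cost figures printed by the first poller_perf run above follow directly from the counters: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. For the 1-microsecond-period run this reproduces the reported 6271 cyc / 2518 nsec; the 0-microsecond run announced just above comes in much cheaper, as its output below shows.

    busy=2495893786
    runs=398000
    tsc_hz=2490000000
    cyc=$(( busy / runs ))                    # 6271 cycles per poller call
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # ~2518 ns at 2.49 GHz
    echo "$cyc cyc, $nsec nsec"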
00:14:09.461 [2024-05-15 13:52:07.796129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.402 ====================================== 00:14:10.402 busy:2492087196 (cyc) 00:14:10.402 total_run_count: 5227000 00:14:10.402 tsc_hz: 2490000000 (cyc) 00:14:10.402 ====================================== 00:14:10.402 poller_cost: 476 (cyc), 191 (nsec) 00:14:10.402 ************************************ 00:14:10.402 END TEST thread_poller_perf 00:14:10.402 ************************************ 00:14:10.402 00:14:10.402 real 0m1.356s 00:14:10.402 user 0m1.186s 00:14:10.402 sys 0m0.063s 00:14:10.402 13:52:08 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:10.402 13:52:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:14:10.402 13:52:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:14:10.402 00:14:10.402 real 0m2.976s 00:14:10.402 user 0m2.483s 00:14:10.402 sys 0m0.284s 00:14:10.402 ************************************ 00:14:10.402 END TEST thread 00:14:10.402 ************************************ 00:14:10.402 13:52:08 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:10.402 13:52:08 thread -- common/autotest_common.sh@10 -- # set +x 00:14:10.670 13:52:09 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:14:10.670 13:52:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:10.670 13:52:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:10.670 13:52:09 -- common/autotest_common.sh@10 -- # set +x 00:14:10.670 ************************************ 00:14:10.670 START TEST accel 00:14:10.670 ************************************ 00:14:10.670 13:52:09 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:14:10.670 * Looking for test storage... 00:14:10.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:14:10.670 13:52:09 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:14:10.670 13:52:09 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:14:10.670 13:52:09 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:10.670 13:52:09 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=60543 00:14:10.670 13:52:09 accel -- accel/accel.sh@63 -- # waitforlisten 60543 00:14:10.670 13:52:09 accel -- common/autotest_common.sh@827 -- # '[' -z 60543 ']' 00:14:10.670 13:52:09 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:14:10.670 13:52:09 accel -- accel/accel.sh@61 -- # build_accel_config 00:14:10.670 13:52:09 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.670 13:52:09 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:10.670 13:52:09 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:10.670 13:52:09 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:10.670 13:52:09 accel -- common/autotest_common.sh@10 -- # set +x 00:14:10.670 13:52:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:10.670 13:52:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:10.670 13:52:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:10.670 13:52:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:10.670 13:52:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:10.670 13:52:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:14:10.670 13:52:09 accel -- accel/accel.sh@41 -- # jq -r . 00:14:10.670 [2024-05-15 13:52:09.200999] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:10.670 [2024-05-15 13:52:09.201649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60543 ] 00:14:10.928 [2024-05-15 13:52:09.342116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.929 [2024-05-15 13:52:09.447450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.865 13:52:10 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.865 13:52:10 accel -- common/autotest_common.sh@860 -- # return 0 00:14:11.865 13:52:10 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:14:11.865 13:52:10 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:14:11.865 13:52:10 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:14:11.865 13:52:10 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:14:11.865 13:52:10 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:14:11.865 13:52:10 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:14:11.865 13:52:10 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:14:11.865 13:52:10 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.865 13:52:10 accel -- common/autotest_common.sh@10 -- # set +x 00:14:11.865 13:52:10 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.865 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.865 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.865 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.865 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.865 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.865 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.865 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.865 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.865 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.865 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 
13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # IFS== 00:14:11.866 13:52:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:14:11.866 13:52:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:11.866 13:52:10 accel -- accel/accel.sh@75 -- # killprocess 60543 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@946 -- # '[' -z 60543 ']' 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@950 -- # kill -0 60543 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@951 -- # uname 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60543 00:14:11.866 killing process with pid 60543 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60543' 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@965 -- # kill 60543 00:14:11.866 13:52:10 accel -- common/autotest_common.sh@970 -- # wait 60543 00:14:12.125 13:52:10 accel -- accel/accel.sh@76 -- # trap - ERR 00:14:12.125 13:52:10 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:14:12.125 13:52:10 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:12.125 13:52:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.125 13:52:10 accel -- common/autotest_common.sh@10 -- # set +x 00:14:12.125 13:52:10 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:14:12.125 13:52:10 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
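Aside: the long opc/module loop above is get_expected_opcs flattening the accel_get_opc_assignments RPC output into key=value pairs with the jq filter recorded in the trace; with no hardware accel modules configured, every opcode resolves to the software module. A stand-alone version of that parsing step (illustrative, reusing the same jq expression) would be:

    # dump opcode -> module assignments from the running target
    scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # prints lines such as copy=software, crc32c=software, ...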
00:14:12.125 13:52:10 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:12.125 13:52:10 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:14:12.125 13:52:10 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:14:12.125 13:52:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:12.125 13:52:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.125 13:52:10 accel -- common/autotest_common.sh@10 -- # set +x 00:14:12.125 ************************************ 00:14:12.125 START TEST accel_missing_filename 00:14:12.125 ************************************ 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.125 13:52:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:14:12.125 13:52:10 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:14:12.125 [2024-05-15 13:52:10.659894] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:12.125 [2024-05-15 13:52:10.660439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:14:12.385 [2024-05-15 13:52:10.821544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.385 [2024-05-15 13:52:10.927581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.645 [2024-05-15 13:52:10.971059] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:12.645 [2024-05-15 13:52:11.032608] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:14:12.645 A filename is required. 
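Aside: accel_missing_filename is a negative test: accel_perf is started with -w compress but no -l input file, the app aborts with "A filename is required.", and the NOT wrapper turns that failure into a pass (the harness then normalizes es down to 1). The wrapper body is not fully shown in this log, but the inverted-expectation pattern it implements boils down to something like this simplified sketch:

    NOT() {                       # simplified stand-in for the harness helper
        local es=0
        "$@" || es=$?             # run the wrapped command, capture its exit status
        (( es != 0 ))             # succeed only if the command failed
    }
    # accel_perf here is the harness function wrapping build/examples/accel_perf
    NOT accel_perf -t 1 -w compress   # passes because compress without -l errors out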
00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.645 00:14:12.645 real 0m0.511s 00:14:12.645 user 0m0.330s 00:14:12.645 sys 0m0.119s 00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:12.645 ************************************ 00:14:12.645 END TEST accel_missing_filename 00:14:12.645 ************************************ 00:14:12.645 13:52:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:14:12.645 13:52:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:12.645 13:52:11 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:14:12.645 13:52:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.645 13:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:14:12.645 ************************************ 00:14:12.645 START TEST accel_compress_verify 00:14:12.645 ************************************ 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.645 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:12.645 13:52:11 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:14:12.645 13:52:11 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:14:12.905 [2024-05-15 13:52:11.229792] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:12.905 [2024-05-15 13:52:11.230022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60619 ] 00:14:12.905 [2024-05-15 13:52:11.371703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.183 [2024-05-15 13:52:11.478058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.183 [2024-05-15 13:52:11.524448] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:13.183 [2024-05-15 13:52:11.597069] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:14:13.183 00:14:13.183 Compression does not support the verify option, aborting. 00:14:13.183 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:14:13.183 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:13.183 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:14:13.183 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:14:13.183 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:14:13.183 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:13.183 00:14:13.184 real 0m0.527s 00:14:13.184 user 0m0.345s 00:14:13.184 sys 0m0.116s 00:14:13.184 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.184 13:52:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:14:13.184 ************************************ 00:14:13.184 END TEST accel_compress_verify 00:14:13.184 ************************************ 00:14:13.443 13:52:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:14:13.443 13:52:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:13.443 13:52:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.443 13:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:14:13.443 ************************************ 00:14:13.443 START TEST accel_wrong_workload 00:14:13.443 ************************************ 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:14:13.443 13:52:11 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:14:13.443 13:52:11 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:14:13.443 Unsupported workload type: foobar 00:14:13.443 [2024-05-15 13:52:11.820851] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:14:13.443 accel_perf options: 00:14:13.443 [-h help message] 00:14:13.443 [-q queue depth per core] 00:14:13.443 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:14:13.443 [-T number of threads per core 00:14:13.443 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:14:13.443 [-t time in seconds] 00:14:13.443 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:14:13.443 [ dif_verify, , dif_generate, dif_generate_copy 00:14:13.443 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:14:13.443 [-l for compress/decompress workloads, name of uncompressed input file 00:14:13.443 [-S for crc32c workload, use this seed value (default 0) 00:14:13.443 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:14:13.443 [-f for fill workload, use this BYTE value (default 255) 00:14:13.443 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:14:13.443 [-y verify result if this switch is on] 00:14:13.443 [-a tasks to allocate per core (default: same value as -q)] 00:14:13.443 Can be used to spread operations across a wider range of memory. 
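Aside: the usage dump above is accel_perf rejecting -w foobar; its -w line enumerates the valid workload names and its -x line notes the xor source-buffer count has a minimum of 2, which is exactly the constraint the accel_negative_buffers test below violates on purpose with -x -1. A well-formed xor run that stays inside those constraints would look like this (illustrative, same binary as used throughout this log, run without the harness's generated config):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2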
00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:13.443 00:14:13.443 real 0m0.038s 00:14:13.443 user 0m0.023s 00:14:13.443 sys 0m0.015s 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.443 ************************************ 00:14:13.443 13:52:11 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:14:13.443 END TEST accel_wrong_workload 00:14:13.443 ************************************ 00:14:13.443 13:52:11 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:14:13.443 13:52:11 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:14:13.443 13:52:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.443 13:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:14:13.443 ************************************ 00:14:13.444 START TEST accel_negative_buffers 00:14:13.444 ************************************ 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:14:13.444 13:52:11 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:14:13.444 -x option must be non-negative. 
00:14:13.444 [2024-05-15 13:52:11.916340] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:14:13.444 accel_perf options: 00:14:13.444 [-h help message] 00:14:13.444 [-q queue depth per core] 00:14:13.444 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:14:13.444 [-T number of threads per core 00:14:13.444 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:14:13.444 [-t time in seconds] 00:14:13.444 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:14:13.444 [ dif_verify, , dif_generate, dif_generate_copy 00:14:13.444 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:14:13.444 [-l for compress/decompress workloads, name of uncompressed input file 00:14:13.444 [-S for crc32c workload, use this seed value (default 0) 00:14:13.444 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:14:13.444 [-f for fill workload, use this BYTE value (default 255) 00:14:13.444 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:14:13.444 [-y verify result if this switch is on] 00:14:13.444 [-a tasks to allocate per core (default: same value as -q)] 00:14:13.444 Can be used to spread operations across a wider range of memory. 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:13.444 00:14:13.444 real 0m0.041s 00:14:13.444 user 0m0.022s 00:14:13.444 sys 0m0.018s 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.444 13:52:11 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:14:13.444 ************************************ 00:14:13.444 END TEST accel_negative_buffers 00:14:13.444 ************************************ 00:14:13.444 13:52:11 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:14:13.444 13:52:11 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:14:13.444 13:52:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.444 13:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:14:13.444 ************************************ 00:14:13.444 START TEST accel_crc32c 00:14:13.444 ************************************ 00:14:13.444 13:52:11 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:14:13.444 13:52:11 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:14:13.704 [2024-05-15 13:52:12.014901] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:13.704 [2024-05-15 13:52:12.014987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60683 ] 00:14:13.704 [2024-05-15 13:52:12.155709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.704 [2024-05-15 13:52:12.258254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:14:13.963 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:13.964 13:52:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:14.908 13:52:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:14.908 13:52:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:14.908 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:14.908 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:14.908 13:52:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:14.908 13:52:13 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:14.909 13:52:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:15.167 13:52:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:15.167 13:52:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:15.167 00:14:15.167 real 0m1.480s 00:14:15.167 user 0m0.018s 00:14:15.167 sys 0m0.003s 00:14:15.167 13:52:13 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:15.167 ************************************ 00:14:15.167 END TEST accel_crc32c 00:14:15.167 ************************************ 00:14:15.167 13:52:13 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:14:15.167 13:52:13 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:14:15.167 13:52:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:14:15.167 13:52:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.167 13:52:13 accel -- common/autotest_common.sh@10 -- # set +x 00:14:15.167 ************************************ 00:14:15.167 START TEST accel_crc32c_C2 00:14:15.167 ************************************ 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:15.168 13:52:13 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:14:15.168 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:14:15.168 [2024-05-15 13:52:13.562356] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:15.168 [2024-05-15 13:52:13.562443] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60712 ] 00:14:15.168 [2024-05-15 13:52:13.703580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.426 [2024-05-15 13:52:13.807578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:15.426 13:52:13 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:14:15.426 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:15.427 13:52:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:16.846 00:14:16.846 real 0m1.482s 00:14:16.846 user 0m1.292s 00:14:16.846 sys 0m0.103s 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:16.846 13:52:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:14:16.846 ************************************ 00:14:16.846 END TEST accel_crc32c_C2 00:14:16.846 ************************************ 00:14:16.846 13:52:15 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:14:16.846 13:52:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:16.846 13:52:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.846 13:52:15 accel -- common/autotest_common.sh@10 -- # set +x 00:14:16.846 ************************************ 00:14:16.846 START TEST accel_copy 00:14:16.846 ************************************ 00:14:16.846 13:52:15 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:14:16.846 
13:52:15 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:14:16.846 [2024-05-15 13:52:15.118634] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:16.846 [2024-05-15 13:52:15.118716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60751 ] 00:14:16.846 [2024-05-15 13:52:15.259549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.846 [2024-05-15 13:52:15.354038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:16.846 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:17.106 13:52:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:18.042 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:18.043 13:52:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:18.043 13:52:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:18.043 13:52:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:14:18.043 ************************************ 00:14:18.043 END TEST accel_copy 00:14:18.043 ************************************ 00:14:18.043 13:52:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:18.043 00:14:18.043 real 0m1.468s 00:14:18.043 user 0m0.015s 00:14:18.043 sys 0m0.003s 00:14:18.043 13:52:16 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:18.043 13:52:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:14:18.303 13:52:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:18.303 13:52:16 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:18.303 13:52:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.303 13:52:16 accel -- common/autotest_common.sh@10 -- # set +x 00:14:18.303 ************************************ 00:14:18.303 START TEST accel_fill 00:14:18.303 ************************************ 00:14:18.303 13:52:16 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:14:18.303 13:52:16 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:14:18.303 [2024-05-15 13:52:16.649912] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:18.303 [2024-05-15 13:52:16.649999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60781 ] 00:14:18.303 [2024-05-15 13:52:16.792204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.581 [2024-05-15 13:52:16.892854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.581 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:18.582 13:52:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:19.969 13:52:18 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:14:19.969 13:52:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:19.969 00:14:19.969 real 0m1.480s 00:14:19.969 user 0m1.292s 00:14:19.969 sys 0m0.097s 00:14:19.969 13:52:18 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:19.969 ************************************ 00:14:19.969 END TEST accel_fill 00:14:19.969 ************************************ 00:14:19.970 13:52:18 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:14:19.970 13:52:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:14:19.970 13:52:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:19.970 13:52:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.970 13:52:18 accel -- common/autotest_common.sh@10 -- # set +x 00:14:19.970 ************************************ 00:14:19.970 START TEST accel_copy_crc32c 00:14:19.970 ************************************ 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:14:19.970 [2024-05-15 13:52:18.186268] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:19.970 [2024-05-15 13:52:18.186379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60816 ] 00:14:19.970 [2024-05-15 13:52:18.330174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.970 [2024-05-15 13:52:18.430372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:19.970 13:52:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:21.349 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:21.350 00:14:21.350 real 0m1.478s 00:14:21.350 user 0m1.288s 00:14:21.350 sys 0m0.102s 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.350 13:52:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:14:21.350 ************************************ 00:14:21.350 END TEST accel_copy_crc32c 00:14:21.350 ************************************ 00:14:21.350 13:52:19 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:14:21.350 13:52:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:14:21.350 13:52:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.350 13:52:19 accel -- common/autotest_common.sh@10 -- # set +x 00:14:21.350 ************************************ 00:14:21.350 START TEST accel_copy_crc32c_C2 00:14:21.350 ************************************ 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:14:21.350 13:52:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:14:21.350 [2024-05-15 13:52:19.735822] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:21.350 [2024-05-15 13:52:19.735904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60850 ] 00:14:21.350 [2024-05-15 13:52:19.876568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.609 [2024-05-15 13:52:19.976059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:21.609 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:21.610 13:52:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.003 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:23.004 00:14:23.004 real 0m1.477s 00:14:23.004 user 0m1.293s 00:14:23.004 sys 0m0.096s 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:23.004 13:52:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:14:23.004 ************************************ 00:14:23.004 END TEST accel_copy_crc32c_C2 00:14:23.004 ************************************ 00:14:23.004 13:52:21 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:14:23.004 13:52:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:23.004 13:52:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.004 13:52:21 accel -- common/autotest_common.sh@10 -- # set +x 00:14:23.004 ************************************ 00:14:23.004 START TEST accel_dualcast 00:14:23.004 ************************************ 00:14:23.004 13:52:21 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:14:23.004 [2024-05-15 13:52:21.261168] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:23.004 [2024-05-15 13:52:21.261244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60885 ] 00:14:23.004 [2024-05-15 13:52:21.404410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.004 [2024-05-15 13:52:21.500935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:23.004 13:52:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:24.382 
13:52:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:14:24.382 13:52:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:24.382 00:14:24.382 real 0m1.467s 00:14:24.382 user 0m1.283s 00:14:24.382 sys 0m0.100s 00:14:24.382 13:52:22 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:24.382 13:52:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:14:24.382 ************************************ 00:14:24.382 END TEST accel_dualcast 00:14:24.382 ************************************ 00:14:24.382 13:52:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:14:24.382 13:52:22 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:24.382 13:52:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.382 13:52:22 accel -- common/autotest_common.sh@10 -- # set +x 00:14:24.383 ************************************ 00:14:24.383 START TEST accel_compare 00:14:24.383 ************************************ 00:14:24.383 13:52:22 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:14:24.383 13:52:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:14:24.383 [2024-05-15 13:52:22.809786] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:24.383 [2024-05-15 13:52:22.809880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60919 ] 00:14:24.642 [2024-05-15 13:52:22.949191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.642 [2024-05-15 13:52:23.050707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.642 13:52:23 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.642 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:24.643 13:52:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:26.018 13:52:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:26.019 13:52:24 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:14:26.019 13:52:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:26.019 00:14:26.019 real 0m1.475s 00:14:26.019 user 0m1.286s 00:14:26.019 sys 0m0.100s 00:14:26.019 13:52:24 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:26.019 ************************************ 00:14:26.019 END TEST accel_compare 00:14:26.019 ************************************ 00:14:26.019 13:52:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:14:26.019 13:52:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:14:26.019 13:52:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:26.019 13:52:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:26.019 13:52:24 accel -- common/autotest_common.sh@10 -- # set +x 00:14:26.019 ************************************ 00:14:26.019 START TEST accel_xor 00:14:26.019 ************************************ 00:14:26.019 13:52:24 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:14:26.019 13:52:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:14:26.019 [2024-05-15 13:52:24.356859] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:26.019 [2024-05-15 13:52:24.356944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60954 ] 00:14:26.019 [2024-05-15 13:52:24.488406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.278 [2024-05-15 13:52:24.588190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:14:26.278 13:52:24 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:26.278 13:52:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:27.658 00:14:27.658 real 0m1.465s 00:14:27.658 user 0m1.279s 00:14:27.658 sys 0m0.100s 00:14:27.658 13:52:25 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.658 13:52:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:14:27.658 ************************************ 00:14:27.658 END TEST accel_xor 00:14:27.658 ************************************ 00:14:27.658 13:52:25 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:14:27.658 13:52:25 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:14:27.658 13:52:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:27.658 13:52:25 accel -- common/autotest_common.sh@10 -- # set +x 00:14:27.658 ************************************ 00:14:27.658 START TEST accel_xor 00:14:27.658 ************************************ 00:14:27.658 13:52:25 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:14:27.658 13:52:25 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:14:27.658 [2024-05-15 13:52:25.879846] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:27.658 [2024-05-15 13:52:25.880083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60988 ] 00:14:27.658 [2024-05-15 13:52:26.016934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.658 [2024-05-15 13:52:26.105283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:14:27.658 13:52:26 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:27.658 13:52:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:29.036 ************************************ 00:14:29.036 END TEST accel_xor 00:14:29.036 ************************************ 00:14:29.036 13:52:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:29.037 00:14:29.037 real 0m1.462s 00:14:29.037 user 0m1.273s 00:14:29.037 sys 0m0.101s 00:14:29.037 13:52:27 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:29.037 13:52:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:14:29.037 13:52:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:14:29.037 13:52:27 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:14:29.037 13:52:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.037 13:52:27 accel -- common/autotest_common.sh@10 -- # set +x 00:14:29.037 ************************************ 00:14:29.037 START TEST accel_dif_verify 00:14:29.037 ************************************ 00:14:29.037 13:52:27 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:14:29.037 13:52:27 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:14:29.037 [2024-05-15 13:52:27.408574] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:29.037 [2024-05-15 13:52:27.408786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61023 ] 00:14:29.037 [2024-05-15 13:52:27.541476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.373 [2024-05-15 13:52:27.641023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:29.373 13:52:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:30.313 ************************************ 00:14:30.313 END TEST accel_dif_verify 00:14:30.313 ************************************ 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:14:30.313 13:52:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:30.313 00:14:30.313 real 0m1.466s 00:14:30.313 user 0m0.017s 00:14:30.313 sys 0m0.004s 00:14:30.313 13:52:28 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.313 13:52:28 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:14:30.572 13:52:28 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:14:30.572 13:52:28 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:14:30.572 13:52:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:30.572 13:52:28 accel -- common/autotest_common.sh@10 -- # set +x 00:14:30.572 ************************************ 00:14:30.572 START TEST accel_dif_generate 00:14:30.572 ************************************ 00:14:30.572 13:52:28 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:14:30.572 13:52:28 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:14:30.572 [2024-05-15 13:52:28.936468] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:30.572 [2024-05-15 13:52:28.936547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61057 ] 00:14:30.572 [2024-05-15 13:52:29.073969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.832 [2024-05-15 13:52:29.173531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 
13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:30.832 13:52:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:14:32.211 13:52:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:32.211 00:14:32.211 real 0m1.471s 00:14:32.211 user 0m1.285s 00:14:32.211 sys 0m0.099s 00:14:32.211 13:52:30 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.211 
13:52:30 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:14:32.211 ************************************ 00:14:32.211 END TEST accel_dif_generate 00:14:32.211 ************************************ 00:14:32.211 13:52:30 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:14:32.211 13:52:30 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:14:32.211 13:52:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.211 13:52:30 accel -- common/autotest_common.sh@10 -- # set +x 00:14:32.211 ************************************ 00:14:32.211 START TEST accel_dif_generate_copy 00:14:32.211 ************************************ 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:14:32.211 [2024-05-15 13:52:30.470095] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:32.211 [2024-05-15 13:52:30.470175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61094 ] 00:14:32.211 [2024-05-15 13:52:30.619815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.211 [2024-05-15 13:52:30.713732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.211 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.212 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:32.481 13:52:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:33.439 ************************************ 00:14:33.439 END TEST accel_dif_generate_copy 00:14:33.439 ************************************ 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:33.439 00:14:33.439 real 0m1.482s 00:14:33.439 user 0m1.277s 00:14:33.439 sys 0m0.115s 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:33.439 13:52:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:14:33.439 13:52:31 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:14:33.439 13:52:31 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:33.439 13:52:31 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:14:33.439 13:52:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:33.439 13:52:31 accel -- common/autotest_common.sh@10 -- # set +x 00:14:33.698 ************************************ 00:14:33.698 START TEST accel_comp 00:14:33.698 ************************************ 00:14:33.698 13:52:31 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:33.698 13:52:31 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:14:33.698 13:52:31 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
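The compress pass starting above is driven through the accel_test wrapper with -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib. As a rough reproduction sketch outside the Jenkins harness (assuming an SPDK tree built at the same path as in this run, and omitting the -c JSON accel config that the wrapper pipes in over /dev/fd, so the default software module should be used):

  # 1-second software compress workload against the bundled bib test input
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib

Here -w selects the accel opcode, -t the run duration in seconds, and -l the input file, mirroring the flags visible in the trace above; the exact flag semantics are taken from this log, not re-verified against the accel_perf help text.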
00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:33.698 13:52:32 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:33.699 13:52:32 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:14:33.699 13:52:32 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:14:33.699 [2024-05-15 13:52:32.028258] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:33.699 [2024-05-15 13:52:32.028482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:14:33.699 [2024-05-15 13:52:32.169706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.959 [2024-05-15 13:52:32.270969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:33.959 13:52:32 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:33.959 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:33.960 13:52:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:14:35.339 13:52:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:35.339 00:14:35.339 real 0m1.486s 00:14:35.339 user 0m1.292s 00:14:35.339 sys 0m0.106s 00:14:35.339 13:52:33 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:35.339 13:52:33 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:14:35.339 ************************************ 00:14:35.339 END TEST accel_comp 00:14:35.339 ************************************ 00:14:35.339 13:52:33 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:35.339 13:52:33 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:14:35.339 13:52:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:35.339 13:52:33 accel -- common/autotest_common.sh@10 -- # set +x 00:14:35.339 ************************************ 00:14:35.339 START TEST accel_decomp 00:14:35.339 ************************************ 00:14:35.339 13:52:33 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:35.339 13:52:33 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:14:35.339 
13:52:33 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:14:35.339 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.339 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.339 13:52:33 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:14:35.340 [2024-05-15 13:52:33.586524] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:35.340 [2024-05-15 13:52:33.586629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61163 ] 00:14:35.340 [2024-05-15 13:52:33.730681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.340 [2024-05-15 13:52:33.833148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:35.340 13:52:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:36.737 ************************************ 00:14:36.737 END TEST accel_decomp 00:14:36.737 ************************************ 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:36.737 13:52:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:36.737 00:14:36.737 real 0m1.493s 00:14:36.737 user 0m1.295s 00:14:36.737 sys 0m0.110s 00:14:36.737 13:52:35 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:36.737 13:52:35 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:14:36.737 13:52:35 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:36.737 13:52:35 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:14:36.737 13:52:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:36.737 13:52:35 accel -- common/autotest_common.sh@10 -- # set +x 
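The next case queued above, accel_decmop_full (spelled as in the test script), repeats the decompress workload with -y to verify the output and -o 0; judging by the '111250 bytes' buffer size read back further down, -o 0 appears to make the run use the whole test file as a single buffer instead of 4096-byte blocks. A minimal standalone sketch of the same invocation, under the same assumptions as the earlier compress sketch (build path from this log, harness-supplied -c config skipped):

  # 1-second verified software decompress of the bib test file, -o 0 as used by accel_decmop_full
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0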
00:14:36.737 ************************************ 00:14:36.737 START TEST accel_decmop_full 00:14:36.737 ************************************ 00:14:36.737 13:52:35 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:14:36.737 13:52:35 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:14:36.737 [2024-05-15 13:52:35.146487] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:36.737 [2024-05-15 13:52:35.146570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61192 ] 00:14:36.737 [2024-05-15 13:52:35.286353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.996 [2024-05-15 13:52:35.387512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:14:36.996 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:36.997 13:52:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:38.375 13:52:36 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:38.375 00:14:38.375 real 0m1.497s 00:14:38.375 user 0m1.304s 00:14:38.375 sys 0m0.104s 00:14:38.375 ************************************ 00:14:38.375 END TEST accel_decmop_full 00:14:38.375 ************************************ 00:14:38.375 13:52:36 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:38.375 13:52:36 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 13:52:36 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:38.375 13:52:36 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:14:38.375 13:52:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:38.375 13:52:36 accel -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 ************************************ 00:14:38.375 START TEST accel_decomp_mcore 00:14:38.375 ************************************ 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
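The accel_decomp_mcore step starting above differs from the single-core case that just finished only in the core mask: accel_perf is launched with -m 0xf, so the EAL output just below reports four available cores and starts a reactor on each of cores 0-3. A minimal sketch of reproducing that invocation by hand, assuming a built tree under /home/vagrant/spdk_repo/spdk; the flag meanings are inferred from this trace (-t run time in seconds, -w workload, -l input file, -y verify the output, -m core mask), and the -c /dev/fd/62 JSON config seen in the trace is left out because build_accel_config leaves it empty here (the accel_json_cfg=() and [[ -n '' ]] checks traced below).

  # Hypothetical manual re-run of the traced step; paths and flag meanings
  # are inferred from the surrounding log, not taken from accel_perf --help.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -m 0xf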
00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:14:38.375 13:52:36 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:14:38.375 [2024-05-15 13:52:36.708822] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:38.375 [2024-05-15 13:52:36.709104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61232 ] 00:14:38.375 [2024-05-15 13:52:36.855339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.635 [2024-05-15 13:52:36.958521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.635 [2024-05-15 13:52:36.958641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.635 [2024-05-15 13:52:36.958816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.635 [2024-05-15 13:52:36.958821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:38.635 13:52:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 ************************************ 00:14:40.011 END TEST accel_decomp_mcore 00:14:40.011 ************************************ 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:40.011 00:14:40.011 real 0m1.515s 00:14:40.011 user 0m4.651s 00:14:40.011 sys 0m0.135s 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.011 13:52:38 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:14:40.011 13:52:38 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:40.011 13:52:38 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:40.011 13:52:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:40.011 13:52:38 accel -- common/autotest_common.sh@10 -- # set +x 00:14:40.011 ************************************ 00:14:40.011 START TEST accel_decomp_full_mcore 00:14:40.011 ************************************ 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:14:40.011 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
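The run_test helper that frames each of these cases is what produces the starred START TEST / END TEST banners and the real/user/sys timings seen above: 1.515 s of wall time against 4.651 s of user CPU for the four-core run, which is about what four polling reactors measured over a one-second window should report. A simplified stand-in for that wrapper, reconstructed only from what its banners and timings in this log show; the real helper in common/autotest_common.sh (the @1097/@1103/@1121 lines in the trace) also manages xtrace and failure accounting.

  # Simplified sketch of run_test as its output appears in this log; the
  # real implementation in test/common/autotest_common.sh does more.
  run_test() {
      local name=$1 banner rc; shift
      banner=$(printf '*%.0s' {1..36})
      echo "$banner"; echo "START TEST $name"; echo "$banner"
      time "$@"
      rc=$?
      echo "$banner"; echo "END TEST $name"; echo "$banner"
      return "$rc"
  }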
00:14:40.011 [2024-05-15 13:52:38.298440] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:40.011 [2024-05-15 13:52:38.298522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ] 00:14:40.011 [2024-05-15 13:52:38.439215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.011 [2024-05-15 13:52:38.543168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.011 [2024-05-15 13:52:38.543353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.011 [2024-05-15 13:52:38.544259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.011 [2024-05-15 13:52:38.544261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:40.310 13:52:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:41.246 13:52:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:41.246 00:14:41.246 real 0m1.518s 00:14:41.246 user 0m4.649s 00:14:41.246 sys 0m0.127s 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.246 13:52:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:14:41.246 ************************************ 00:14:41.246 END TEST accel_decomp_full_mcore 00:14:41.246 ************************************ 00:14:41.505 13:52:39 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.505 13:52:39 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:14:41.505 13:52:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.505 13:52:39 accel -- common/autotest_common.sh@10 -- # set +x 00:14:41.505 ************************************ 00:14:41.505 START TEST accel_decomp_mthread 00:14:41.505 ************************************ 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:14:41.505 13:52:39 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:14:41.505 [2024-05-15 13:52:39.882012] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
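Taken together, this part of the log runs four flavours of the same software decompress job, and the run_test lines above and below spell out how they differ: the mcore pair adds -m 0xf (four reactors), the "full" pair adds -o 0 (the job size in the trace changes from 4096 bytes to the full 111250-byte input), and the mthread pair starting here drops the core mask in favour of -T 2, two worker threads on the single core 0. A condensed, hypothetical replay of those four invocations, with paths and flag meanings as assumed earlier:

  # Hypothetical back-to-back replay of the four decompress variants traced
  # in this section; flag meanings are inferred from the surrounding log.
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf        # accel_decomp_mcore
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf   # accel_decomp_full_mcore
  "$PERF" -t 1 -w decompress -l "$BIB" -y -T 2          # accel_decomp_mthread
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2     # accel_decomp_full_mthread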
00:14:41.505 [2024-05-15 13:52:39.882275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61307 ] 00:14:41.505 [2024-05-15 13:52:40.024401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.765 [2024-05-15 13:52:40.126877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:41.765 13:52:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:43.144 00:14:43.144 real 0m1.494s 00:14:43.144 user 0m1.308s 00:14:43.144 sys 0m0.097s 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.144 ************************************ 00:14:43.144 END TEST accel_decomp_mthread 00:14:43.144 13:52:41 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:14:43.144 ************************************ 00:14:43.144 13:52:41 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:43.144 13:52:41 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:43.144 13:52:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.144 13:52:41 accel -- common/autotest_common.sh@10 -- # set +x 00:14:43.144 ************************************ 00:14:43.144 START TEST accel_decomp_full_mthread 00:14:43.144 ************************************ 00:14:43.144 13:52:41 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:14:43.144 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:14:43.144 [2024-05-15 13:52:41.433058] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
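Each of these accel cases finishes with the same three checks, traced as accel/accel.sh@27: [[ -n software ]], [[ -n decompress ]] and [[ software == \s\o\f\t\w\a\r\e ]]. After the val-parsing loop has consumed accel_perf's echoed settings, the script asserts that a module name and an opcode were actually captured and that the module which serviced the job is the software path; the backslash escaping is only bash xtrace printing the quoted right-hand side of the == pattern. A rough reconstruction of those lines, with variable names taken from the accel_module=/accel_opc= assignments visible at @22/@23 in the trace (the exact upstream wording is assumed):

  # Rough reconstruction of the end-of-test assertions traced as accel.sh@27;
  # $accel_module and $accel_opc come from the @22/@23 assignments above,
  # the exact script text is an assumption.
  [[ -n $accel_module ]]              # a module name was parsed from accel_perf
  [[ -n $accel_opc ]]                 # an opcode was parsed
  [[ $accel_module == "software" ]]   # and the software module serviced the job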
00:14:43.144 [2024-05-15 13:52:41.433221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61336 ] 00:14:43.144 [2024-05-15 13:52:41.580133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.144 [2024-05-15 13:52:41.678584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:43.404 13:52:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:44.784 00:14:44.784 real 0m1.505s 00:14:44.784 user 0m1.308s 00:14:44.784 sys 0m0.109s 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.784 ************************************ 00:14:44.784 END TEST accel_decomp_full_mthread 00:14:44.784 ************************************ 00:14:44.784 13:52:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:14:44.784 13:52:42 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:14:44.784 13:52:42 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:44.784 13:52:42 accel -- accel/accel.sh@137 -- # build_accel_config 00:14:44.784 13:52:42 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:44.784 13:52:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:44.784 13:52:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:44.784 13:52:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:44.784 13:52:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.784 13:52:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:44.784 13:52:42 accel -- common/autotest_common.sh@10 -- # set +x 00:14:44.784 13:52:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:44.784 13:52:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:14:44.784 13:52:42 accel -- accel/accel.sh@41 -- # jq -r . 00:14:44.784 ************************************ 00:14:44.784 START TEST accel_dif_functional_tests 00:14:44.784 ************************************ 00:14:44.784 13:52:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:44.784 [2024-05-15 13:52:43.036421] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:44.784 [2024-05-15 13:52:43.036499] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61378 ] 00:14:44.784 [2024-05-15 13:52:43.177787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:44.784 [2024-05-15 13:52:43.276140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.784 [2024-05-15 13:52:43.276254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.784 [2024-05-15 13:52:43.276255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.054 00:14:45.054 00:14:45.054 CUnit - A unit testing framework for C - Version 2.1-3 00:14:45.054 http://cunit.sourceforge.net/ 00:14:45.054 00:14:45.054 00:14:45.054 Suite: accel_dif 00:14:45.054 Test: verify: DIF generated, GUARD check ...passed 00:14:45.054 Test: verify: DIF generated, APPTAG check ...passed 00:14:45.054 Test: verify: DIF generated, REFTAG check ...passed 00:14:45.054 Test: verify: DIF not generated, GUARD check ...passed 00:14:45.054 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 13:52:43.347858] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:45.054 [2024-05-15 13:52:43.347991] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:45.054 [2024-05-15 13:52:43.348033] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:45.054 passed 00:14:45.054 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 13:52:43.348134] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:45.054 passed 00:14:45.054 Test: verify: APPTAG correct, APPTAG check ...passed 00:14:45.054 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:14:45.054 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:14:45.054 Test: verify: REFTAG incorrect, REFTAG ignore ...passed[2024-05-15 13:52:43.348164] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:45.054 [2024-05-15 13:52:43.348185] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:45.054 [2024-05-15 13:52:43.348311] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:14:45.054 00:14:45.054 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:14:45.054 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:14:45.054 Test: generate copy: DIF generated, GUARD check ...passed 00:14:45.054 Test: generate copy: DIF generated, APTTAG check ...passed 00:14:45.054 Test: generate copy: DIF generated, REFTAG check ...[2024-05-15 13:52:43.348598] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:14:45.054 passed 00:14:45.054 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:14:45.054 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:14:45.054 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:14:45.054 Test: generate copy: iovecs-len validate ...passed 00:14:45.054 Test: generate copy: buffer alignment validate ...[2024-05-15 13:52:43.348882] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:14:45.054 passed 00:14:45.054 00:14:45.054 Run Summary: Type Total Ran Passed Failed Inactive 00:14:45.054 suites 1 1 n/a 0 0 00:14:45.054 tests 20 20 20 0 0 00:14:45.054 asserts 204 204 204 0 n/a 00:14:45.054 00:14:45.054 Elapsed time = 0.004 seconds 00:14:45.054 00:14:45.054 real 0m0.570s 00:14:45.054 user 0m0.700s 00:14:45.054 sys 0m0.135s 00:14:45.054 13:52:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.054 ************************************ 00:14:45.054 13:52:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:14:45.054 END TEST accel_dif_functional_tests 00:14:45.054 ************************************ 00:14:45.054 ************************************ 00:14:45.054 END TEST accel 00:14:45.054 ************************************ 00:14:45.054 00:14:45.054 real 0m34.586s 00:14:45.054 user 0m36.023s 00:14:45.054 sys 0m4.061s 00:14:45.054 13:52:43 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.054 13:52:43 accel -- common/autotest_common.sh@10 -- # set +x 00:14:45.312 13:52:43 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:45.312 13:52:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:45.312 13:52:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.312 13:52:43 -- common/autotest_common.sh@10 -- # set +x 00:14:45.312 ************************************ 00:14:45.312 START TEST accel_rpc 00:14:45.312 ************************************ 00:14:45.312 13:52:43 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:45.312 * Looking for test storage... 
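The accel_dif_functional_tests run above hands the dif binary its accel configuration on an anonymous descriptor: build_accel_config collects per-module JSON snippets in the accel_json_cfg array (every module guard was false in this run, so the array stayed empty) and the result reaches the test as -c /dev/fd/62 via process substitution. A minimal sketch of that invocation pattern, assuming the standard SPDK subsystem envelope for the JSON rather than copying it from accel.sh:

  accel_json_cfg=()                                  # per-module snippets; empty in this run
  IFS=,                                              # snippets are comma-joined, as in accel.sh@40
  config="{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}"
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(jq -r . <<< "$config")

The Guard, App Tag and Ref Tag compare errors that dif.c logs above are the intended negative cases; the CUnit summary (20 tests run, 0 failed) shows each mismatch was detected rather than missed.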
00:14:45.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:14:45.312 13:52:43 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:45.312 13:52:43 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61448 00:14:45.312 13:52:43 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:45.312 13:52:43 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 61448 00:14:45.312 13:52:43 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 61448 ']' 00:14:45.312 13:52:43 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.312 13:52:43 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:45.312 13:52:43 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.312 13:52:43 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:45.312 13:52:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.312 [2024-05-15 13:52:43.860661] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:45.312 [2024-05-15 13:52:43.861483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61448 ] 00:14:45.571 [2024-05-15 13:52:44.002126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.571 [2024-05-15 13:52:44.097198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.138 13:52:44 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:46.397 13:52:44 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:46.397 13:52:44 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:14:46.397 13:52:44 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:14:46.397 13:52:44 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:14:46.397 13:52:44 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:14:46.397 13:52:44 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:14:46.397 13:52:44 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:46.397 13:52:44 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:46.397 13:52:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.397 ************************************ 00:14:46.397 START TEST accel_assign_opcode 00:14:46.397 ************************************ 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:46.397 [2024-05-15 13:52:44.716828] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # 
rpc_cmd accel_assign_opc -o copy -m software 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:46.397 [2024-05-15 13:52:44.728801] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:14:46.397 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.655 software 00:14:46.655 00:14:46.655 ************************************ 00:14:46.655 END TEST accel_assign_opcode 00:14:46.655 ************************************ 00:14:46.655 real 0m0.252s 00:14:46.655 user 0m0.046s 00:14:46.655 sys 0m0.021s 00:14:46.655 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.655 13:52:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:46.655 13:52:45 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 61448 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 61448 ']' 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 61448 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61448 00:14:46.655 killing process with pid 61448 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61448' 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@965 -- # kill 61448 00:14:46.655 13:52:45 accel_rpc -- common/autotest_common.sh@970 -- # wait 61448 00:14:46.914 00:14:46.914 real 0m1.737s 00:14:46.914 user 0m1.724s 00:14:46.914 sys 0m0.451s 00:14:46.914 13:52:45 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.914 13:52:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.914 ************************************ 00:14:46.914 END TEST accel_rpc 00:14:46.914 ************************************ 00:14:46.914 13:52:45 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:46.914 13:52:45 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:46.914 13:52:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:47.173 13:52:45 -- common/autotest_common.sh@10 -- # set +x 00:14:47.173 ************************************ 00:14:47.173 START TEST app_cmdline 00:14:47.173 ************************************ 00:14:47.173 13:52:45 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:47.173 * Looking for test storage... 00:14:47.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:47.173 13:52:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:14:47.173 13:52:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61532 00:14:47.173 13:52:45 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:14:47.173 13:52:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61532 00:14:47.173 13:52:45 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 61532 ']' 00:14:47.173 13:52:45 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.173 13:52:45 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:47.173 13:52:45 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.173 13:52:45 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:47.173 13:52:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:47.173 [2024-05-15 13:52:45.666632] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
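The cmdline suite brings up spdk_tgt behind an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), so only those two methods are reachable over /var/tmp/spdk.sock and anything else should come back as JSON-RPC error -32601, which is exactly what the env_dpdk_get_mem_stats probe further down receives. A short sketch of exercising that allow-list by hand, using the same rpc.py path and default socket as this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" spdk_get_version            # allowed: returns the version object shown below
  "$rpc" rpc_get_methods             # allowed: lists exactly the two whitelisted methods
  "$rpc" env_dpdk_get_mem_stats      # not on the list: fails with "Method not found" (-32601)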
00:14:47.173 [2024-05-15 13:52:45.666710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61532 ] 00:14:47.432 [2024-05-15 13:52:45.805945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.432 [2024-05-15 13:52:45.911906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.000 13:52:46 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:48.000 13:52:46 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:14:48.000 13:52:46 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:14:48.258 { 00:14:48.258 "version": "SPDK v24.05-pre git sha1 c3870302f", 00:14:48.258 "fields": { 00:14:48.258 "major": 24, 00:14:48.258 "minor": 5, 00:14:48.258 "patch": 0, 00:14:48.258 "suffix": "-pre", 00:14:48.258 "commit": "c3870302f" 00:14:48.258 } 00:14:48.258 } 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:14:48.258 13:52:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.258 13:52:46 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.259 13:52:46 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:48.259 13:52:46 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:48.519 request: 00:14:48.519 { 00:14:48.519 "method": "env_dpdk_get_mem_stats", 00:14:48.519 "req_id": 1 00:14:48.519 } 00:14:48.519 Got JSON-RPC error response 00:14:48.519 response: 00:14:48.519 { 00:14:48.519 "code": -32601, 00:14:48.519 "message": "Method not found" 00:14:48.519 } 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.519 13:52:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61532 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 61532 ']' 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 61532 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:48.519 13:52:46 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61532 00:14:48.519 killing process with pid 61532 00:14:48.519 13:52:47 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:48.519 13:52:47 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:48.519 13:52:47 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61532' 00:14:48.519 13:52:47 app_cmdline -- common/autotest_common.sh@965 -- # kill 61532 00:14:48.519 13:52:47 app_cmdline -- common/autotest_common.sh@970 -- # wait 61532 00:14:49.087 00:14:49.087 real 0m1.883s 00:14:49.087 user 0m2.193s 00:14:49.087 sys 0m0.471s 00:14:49.087 13:52:47 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.087 ************************************ 00:14:49.087 END TEST app_cmdline 00:14:49.087 ************************************ 00:14:49.087 13:52:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:49.087 13:52:47 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:49.087 13:52:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:49.087 13:52:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.087 13:52:47 -- common/autotest_common.sh@10 -- # set +x 00:14:49.087 ************************************ 00:14:49.087 START TEST version 00:14:49.087 ************************************ 00:14:49.087 13:52:47 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:49.087 * Looking for test storage... 
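The version suite that follows derives the release string from include/spdk/version.h with a grep/cut/tr pipeline and then cross-checks it against the installed Python package (python3 -c 'import spdk; print(spdk.__version__)'). A compact sketch of that extraction; it takes the upper-case field name directly, whereas the helper in version.sh is called with lower-case names such as 'major':

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)     # 24 in this run
  minor=$(get_header_version MINOR)     # 5
  patch=$(get_header_version PATCH)     # 0
  suffix=$(get_header_version SUFFIX)   # -pre
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0   # 24.5rc0, matching spdk.__version__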
00:14:49.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:49.087 13:52:47 version -- app/version.sh@17 -- # get_header_version major 00:14:49.087 13:52:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:49.087 13:52:47 version -- app/version.sh@14 -- # cut -f2 00:14:49.087 13:52:47 version -- app/version.sh@14 -- # tr -d '"' 00:14:49.087 13:52:47 version -- app/version.sh@17 -- # major=24 00:14:49.087 13:52:47 version -- app/version.sh@18 -- # get_header_version minor 00:14:49.087 13:52:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:49.087 13:52:47 version -- app/version.sh@14 -- # cut -f2 00:14:49.087 13:52:47 version -- app/version.sh@14 -- # tr -d '"' 00:14:49.087 13:52:47 version -- app/version.sh@18 -- # minor=5 00:14:49.087 13:52:47 version -- app/version.sh@19 -- # get_header_version patch 00:14:49.087 13:52:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:49.087 13:52:47 version -- app/version.sh@14 -- # cut -f2 00:14:49.087 13:52:47 version -- app/version.sh@14 -- # tr -d '"' 00:14:49.087 13:52:47 version -- app/version.sh@19 -- # patch=0 00:14:49.087 13:52:47 version -- app/version.sh@20 -- # get_header_version suffix 00:14:49.088 13:52:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:49.088 13:52:47 version -- app/version.sh@14 -- # cut -f2 00:14:49.088 13:52:47 version -- app/version.sh@14 -- # tr -d '"' 00:14:49.088 13:52:47 version -- app/version.sh@20 -- # suffix=-pre 00:14:49.088 13:52:47 version -- app/version.sh@22 -- # version=24.5 00:14:49.088 13:52:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:14:49.088 13:52:47 version -- app/version.sh@28 -- # version=24.5rc0 00:14:49.088 13:52:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:49.088 13:52:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:14:49.347 13:52:47 version -- app/version.sh@30 -- # py_version=24.5rc0 00:14:49.347 13:52:47 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:14:49.347 00:14:49.347 real 0m0.220s 00:14:49.347 user 0m0.107s 00:14:49.347 sys 0m0.165s 00:14:49.347 ************************************ 00:14:49.347 END TEST version 00:14:49.347 ************************************ 00:14:49.347 13:52:47 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.347 13:52:47 version -- common/autotest_common.sh@10 -- # set +x 00:14:49.347 13:52:47 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:14:49.347 13:52:47 -- spdk/autotest.sh@194 -- # uname -s 00:14:49.347 13:52:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:14:49.347 13:52:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:49.347 13:52:47 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:14:49.347 13:52:47 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:14:49.347 13:52:47 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:14:49.347 13:52:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:49.347 13:52:47 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.347 13:52:47 -- common/autotest_common.sh@10 -- # set +x 00:14:49.347 ************************************ 00:14:49.347 START TEST spdk_dd 00:14:49.347 ************************************ 00:14:49.347 13:52:47 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:14:49.347 * Looking for test storage... 00:14:49.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:49.347 13:52:47 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.347 13:52:47 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.347 13:52:47 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.347 13:52:47 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.347 13:52:47 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.347 13:52:47 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.347 13:52:47 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.347 13:52:47 spdk_dd -- paths/export.sh@5 -- # export PATH 00:14:49.347 13:52:47 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.347 13:52:47 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:49.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:49.914 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:49.914 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:49.914 13:52:48 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:14:49.914 13:52:48 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:14:49.914 13:52:48 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@230 -- # local class 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@232 -- # local progif 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@233 -- # class=01 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@15 -- # local i 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@24 -- # return 0 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@15 -- # local i 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@24 -- # return 0 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:49.914 13:52:48 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:14:50.176 13:52:48 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:50.176 13:52:48 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:50.176 13:52:48 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:14:50.176 13:52:48 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:14:50.176 13:52:48 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@139 -- # local lib so 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 
-- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.176 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* 
]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- 
dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:14:50.177 * spdk_dd linked to liburing 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:50.177 13:52:48 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 
00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:50.177 13:52:48 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:50.178 13:52:48 spdk_dd -- 
common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:50.178 13:52:48 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:14:50.178 13:52:48 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:14:50.178 13:52:48 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:14:50.178 13:52:48 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:14:50.178 13:52:48 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:14:50.178 13:52:48 spdk_dd -- dd/common.sh@157 -- # return 0 00:14:50.178 13:52:48 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:14:50.178 13:52:48 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:14:50.178 13:52:48 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:50.178 13:52:48 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:50.178 13:52:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:50.178 ************************************ 00:14:50.178 START TEST spdk_dd_basic_rw 00:14:50.178 ************************************ 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:14:50.178 * Looking for test storage... 
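check_liburing, traced above, decides whether the uring paths can be exercised: it walks the dynamic-loader listing for the spdk_dd binary and flips liburing_in_use as soon as a liburing.so.* entry appears (liburing.so.2 in this run), otherwise falling back to the build configuration it sources from build_config.sh. A reduced sketch of that loop with the same binary path and read pattern as dd/common.sh; the early break and the omitted build_config fallback are simplifications:

  liburing_in_use=0
  while read -r lib _ so _; do
      if [[ $lib == liburing.so.* ]]; then
          printf '* spdk_dd linked to liburing\n'
          liburing_in_use=1
          break
      fi
  done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)

With liburing_in_use=1 and SPDK_TEST_URING=1, the guard at dd.sh@15 evaluates false and the run continues into spdk_dd_basic_rw below.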
00:14:50.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:14:50.178 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:14:50.439 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:14:50.439 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change 
Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion 
Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- 
dd/basic_rw.sh@93 -- # native_bs=4096 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:50.440 ************************************ 00:14:50.440 START TEST dd_bs_lt_native_bs 00:14:50.440 ************************************ 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:50.440 13:52:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:50.440 { 00:14:50.440 "subsystems": [ 00:14:50.440 { 00:14:50.440 "subsystem": "bdev", 00:14:50.440 "config": [ 00:14:50.440 { 00:14:50.440 "params": { 00:14:50.440 "trtype": "pcie", 00:14:50.440 "traddr": "0000:00:10.0", 00:14:50.440 "name": "Nvme0" 00:14:50.440 }, 00:14:50.440 "method": "bdev_nvme_attach_controller" 00:14:50.440 }, 00:14:50.440 { 00:14:50.440 "method": "bdev_wait_for_examine" 00:14:50.440 } 00:14:50.440 ] 00:14:50.440 } 00:14:50.440 ] 00:14:50.440 } 00:14:50.440 [2024-05-15 13:52:48.991176] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 
initialization... 00:14:50.440 [2024-05-15 13:52:48.991261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61856 ] 00:14:50.698 [2024-05-15 13:52:49.135831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.698 [2024-05-15 13:52:49.221326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.956 [2024-05-15 13:52:49.354613] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:14:50.956 [2024-05-15 13:52:49.354668] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:50.956 [2024-05-15 13:52:49.461501] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:51.215 00:14:51.215 real 0m0.651s 00:14:51.215 user 0m0.451s 00:14:51.215 sys 0m0.148s 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:14:51.215 ************************************ 00:14:51.215 END TEST dd_bs_lt_native_bs 00:14:51.215 ************************************ 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:51.215 ************************************ 00:14:51.215 START TEST dd_rw 00:14:51.215 ************************************ 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 
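The probe traced a few records back is what gives dd_bs_lt_native_bs its expected failure: spdk_nvme_identify reports the namespace's current LBA format (#04, 4096-byte data size), so a --bs of 2048 is smaller than the native block size and spdk_dd has to reject the copy. A minimal sketch of that probe, paraphrased from the xtrace records above rather than quoted from dd/common.sh (variable names re_cur/re_size are illustrative only):

  # Paraphrase of the native-block-size probe traced above (get_native_nvme_bs).
  pci=0000:00:10.0
  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
  re_cur='Current LBA Format: *LBA Format #([0-9]+)'       # -> lbaf=04 on this namespace
  [[ $id =~ $re_cur ]] && lbaf=${BASH_REMATCH[1]}
  re_size="LBA Format #$lbaf: Data Size: *([0-9]+)"        # -> 4096 on this namespace
  [[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}
  echo "$native_bs"

The dd_rw test that starts in the surrounding records reuses this native_bs as the base block size for its read/write passes.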
00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:51.215 13:52:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:51.785 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:14:51.785 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:51.785 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:51.785 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:51.785 [2024-05-15 13:52:50.233258] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:51.785 [2024-05-15 13:52:50.233323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61892 ] 00:14:51.785 { 00:14:51.785 "subsystems": [ 00:14:51.785 { 00:14:51.785 "subsystem": "bdev", 00:14:51.785 "config": [ 00:14:51.785 { 00:14:51.785 "params": { 00:14:51.785 "trtype": "pcie", 00:14:51.785 "traddr": "0000:00:10.0", 00:14:51.785 "name": "Nvme0" 00:14:51.785 }, 00:14:51.785 "method": "bdev_nvme_attach_controller" 00:14:51.785 }, 00:14:51.785 { 00:14:51.785 "method": "bdev_wait_for_examine" 00:14:51.785 } 00:14:51.785 ] 00:14:51.785 } 00:14:51.785 ] 00:14:51.785 } 00:14:52.043 [2024-05-15 13:52:50.368493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.043 [2024-05-15 13:52:50.456115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.302  Copying: 60/60 [kB] (average 19 MBps) 00:14:52.302 00:14:52.302 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:14:52.302 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:52.302 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:52.302 13:52:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:52.302 { 00:14:52.302 "subsystems": [ 00:14:52.302 { 00:14:52.302 "subsystem": "bdev", 00:14:52.302 "config": [ 00:14:52.302 { 00:14:52.302 "params": { 00:14:52.302 "trtype": "pcie", 00:14:52.302 "traddr": "0000:00:10.0", 00:14:52.302 "name": "Nvme0" 00:14:52.302 }, 00:14:52.302 "method": "bdev_nvme_attach_controller" 00:14:52.302 }, 00:14:52.302 { 00:14:52.302 "method": "bdev_wait_for_examine" 00:14:52.302 } 00:14:52.302 ] 00:14:52.302 } 00:14:52.302 ] 00:14:52.302 } 00:14:52.561 [2024-05-15 13:52:50.865670] 
Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:52.561 [2024-05-15 13:52:50.865751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61906 ] 00:14:52.561 [2024-05-15 13:52:51.006740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.561 [2024-05-15 13:52:51.093973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.078  Copying: 60/60 [kB] (average 14 MBps) 00:14:53.078 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:53.078 13:52:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:53.078 [2024-05-15 13:52:51.511139] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
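That completes one write/read/verify pass of basic_rw: 61440 random bytes go through the bdev at the given block size and queue depth, are read back into a second dump file, and the two files are compared. A rough restatement of the traced commands, assuming the gen_bytes/gen_conf helpers seen in the trace write to stdout (the trace feeds the JSON config over /dev/fd/62; process substitution is used here only for brevity):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  gen_bytes 61440 > test/dd/dd.dump0                                                         # random payload
  $SPDK_DD --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)            # write to the bdev
  $SPDK_DD --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf) # read 15 blocks back
  diff -q test/dd/dd.dump0 test/dd/dd.dump1                                                  # round-trip check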
00:14:53.078 [2024-05-15 13:52:51.511212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61921 ] 00:14:53.078 { 00:14:53.078 "subsystems": [ 00:14:53.078 { 00:14:53.078 "subsystem": "bdev", 00:14:53.078 "config": [ 00:14:53.078 { 00:14:53.078 "params": { 00:14:53.078 "trtype": "pcie", 00:14:53.078 "traddr": "0000:00:10.0", 00:14:53.078 "name": "Nvme0" 00:14:53.078 }, 00:14:53.078 "method": "bdev_nvme_attach_controller" 00:14:53.078 }, 00:14:53.078 { 00:14:53.078 "method": "bdev_wait_for_examine" 00:14:53.078 } 00:14:53.078 ] 00:14:53.078 } 00:14:53.078 ] 00:14:53.078 } 00:14:53.337 [2024-05-15 13:52:51.655034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.337 [2024-05-15 13:52:51.738077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.595  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:53.595 00:14:53.595 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:53.595 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:14:53.595 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:14:53.595 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:14:53.595 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:14:53.595 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:53.595 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:54.162 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:14:54.162 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:54.162 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:54.162 13:52:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:54.162 [2024-05-15 13:52:52.715434] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
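Between passes the target is scrubbed rather than trusted to stay clean: the clear_nvme step just traced streams a single 1 MiB block of zeroes from /dev/zero over the start of Nvme0n1 (the "Copying: 1024/1024 [kB]" record), which covers the 61440 bytes the previous pass wrote, so the next pass cannot accidentally verify stale data. Roughly, per the trace:

  # clear_nvme Nvme0n1 '' 61440, as traced: bs=1048576, count=1 covers the region just written
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)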
00:14:54.162 [2024-05-15 13:52:52.715508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:14:54.162 { 00:14:54.162 "subsystems": [ 00:14:54.162 { 00:14:54.162 "subsystem": "bdev", 00:14:54.162 "config": [ 00:14:54.162 { 00:14:54.162 "params": { 00:14:54.162 "trtype": "pcie", 00:14:54.162 "traddr": "0000:00:10.0", 00:14:54.162 "name": "Nvme0" 00:14:54.162 }, 00:14:54.162 "method": "bdev_nvme_attach_controller" 00:14:54.162 }, 00:14:54.162 { 00:14:54.162 "method": "bdev_wait_for_examine" 00:14:54.162 } 00:14:54.162 ] 00:14:54.162 } 00:14:54.162 ] 00:14:54.162 } 00:14:54.421 [2024-05-15 13:52:52.858404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.421 [2024-05-15 13:52:52.962825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.938  Copying: 60/60 [kB] (average 58 MBps) 00:14:54.938 00:14:54.938 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:14:54.938 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:54.938 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:54.938 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:54.938 [2024-05-15 13:52:53.368680] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:54.938 [2024-05-15 13:52:53.368765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61954 ] 00:14:54.938 { 00:14:54.938 "subsystems": [ 00:14:54.938 { 00:14:54.938 "subsystem": "bdev", 00:14:54.938 "config": [ 00:14:54.938 { 00:14:54.938 "params": { 00:14:54.938 "trtype": "pcie", 00:14:54.938 "traddr": "0000:00:10.0", 00:14:54.938 "name": "Nvme0" 00:14:54.938 }, 00:14:54.938 "method": "bdev_nvme_attach_controller" 00:14:54.938 }, 00:14:54.938 { 00:14:54.938 "method": "bdev_wait_for_examine" 00:14:54.938 } 00:14:54.938 ] 00:14:54.938 } 00:14:54.938 ] 00:14:54.938 } 00:14:55.196 [2024-05-15 13:52:53.509719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.196 [2024-05-15 13:52:53.601034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.465  Copying: 60/60 [kB] (average 29 MBps) 00:14:55.465 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:55.465 13:52:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:55.758 [2024-05-15 13:52:54.018836] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:55.758 [2024-05-15 13:52:54.018905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61975 ] 00:14:55.758 { 00:14:55.758 "subsystems": [ 00:14:55.758 { 00:14:55.758 "subsystem": "bdev", 00:14:55.758 "config": [ 00:14:55.758 { 00:14:55.758 "params": { 00:14:55.758 "trtype": "pcie", 00:14:55.758 "traddr": "0000:00:10.0", 00:14:55.758 "name": "Nvme0" 00:14:55.758 }, 00:14:55.758 "method": "bdev_nvme_attach_controller" 00:14:55.758 }, 00:14:55.758 { 00:14:55.758 "method": "bdev_wait_for_examine" 00:14:55.758 } 00:14:55.758 ] 00:14:55.758 } 00:14:55.758 ] 00:14:55.758 } 00:14:55.758 [2024-05-15 13:52:54.159498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.758 [2024-05-15 13:52:54.266280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.272  Copying: 1024/1024 [kB] (average 1000 MBps) 00:14:56.272 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:56.272 13:52:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:56.838 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:14:56.838 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:56.838 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:56.838 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:56.838 [2024-05-15 13:52:55.145979] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
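At this point the 4096-byte rounds are finished and the loop moves to the next block size. The qds=(1 64) and bss+=((native_bs << bs)) declarations traced earlier mean each block size gets a pass at queue depth 1 and at 64, and the count shrinks as the block size grows so every pass stays at roughly the same payload. A paraphrase of the matrix, with the values visible in this trace:

  # Pass matrix driven by basic_rw in this run (values taken from the traced records):
  native_bs=4096
  bss=( $((native_bs << 0)) $((native_bs << 1)) $((native_bs << 2)) )  # 4096 8192 16384
  qds=(1 64)
  # counts used per block size: 15, 7, 3 -> 61440, 57344, 49152 bytes per pass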
00:14:56.838 [2024-05-15 13:52:55.146050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61994 ] 00:14:56.838 { 00:14:56.838 "subsystems": [ 00:14:56.838 { 00:14:56.838 "subsystem": "bdev", 00:14:56.838 "config": [ 00:14:56.838 { 00:14:56.838 "params": { 00:14:56.838 "trtype": "pcie", 00:14:56.838 "traddr": "0000:00:10.0", 00:14:56.838 "name": "Nvme0" 00:14:56.838 }, 00:14:56.838 "method": "bdev_nvme_attach_controller" 00:14:56.838 }, 00:14:56.838 { 00:14:56.838 "method": "bdev_wait_for_examine" 00:14:56.838 } 00:14:56.838 ] 00:14:56.838 } 00:14:56.838 ] 00:14:56.838 } 00:14:56.838 [2024-05-15 13:52:55.284691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.838 [2024-05-15 13:52:55.389885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.355  Copying: 56/56 [kB] (average 27 MBps) 00:14:57.355 00:14:57.355 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:14:57.355 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:57.355 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:57.355 13:52:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:57.355 [2024-05-15 13:52:55.803276] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:57.355 [2024-05-15 13:52:55.803346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62007 ] 00:14:57.355 { 00:14:57.355 "subsystems": [ 00:14:57.355 { 00:14:57.355 "subsystem": "bdev", 00:14:57.355 "config": [ 00:14:57.355 { 00:14:57.355 "params": { 00:14:57.355 "trtype": "pcie", 00:14:57.355 "traddr": "0000:00:10.0", 00:14:57.355 "name": "Nvme0" 00:14:57.355 }, 00:14:57.355 "method": "bdev_nvme_attach_controller" 00:14:57.355 }, 00:14:57.355 { 00:14:57.355 "method": "bdev_wait_for_examine" 00:14:57.355 } 00:14:57.355 ] 00:14:57.355 } 00:14:57.355 ] 00:14:57.355 } 00:14:57.613 [2024-05-15 13:52:55.942244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.613 [2024-05-15 13:52:56.039856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.872  Copying: 56/56 [kB] (average 27 MBps) 00:14:57.872 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:57.872 13:52:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:58.131 { 00:14:58.131 "subsystems": [ 00:14:58.131 { 00:14:58.131 "subsystem": "bdev", 00:14:58.131 "config": [ 00:14:58.131 { 00:14:58.131 "params": { 00:14:58.131 "trtype": "pcie", 00:14:58.131 "traddr": "0000:00:10.0", 00:14:58.131 "name": "Nvme0" 00:14:58.131 }, 00:14:58.131 "method": "bdev_nvme_attach_controller" 00:14:58.131 }, 00:14:58.131 { 00:14:58.131 "method": "bdev_wait_for_examine" 00:14:58.131 } 00:14:58.131 ] 00:14:58.131 } 00:14:58.131 ] 00:14:58.131 } 00:14:58.131 [2024-05-15 13:52:56.445177] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:58.131 [2024-05-15 13:52:56.445238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62023 ] 00:14:58.131 [2024-05-15 13:52:56.586485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.131 [2024-05-15 13:52:56.682843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.647  Copying: 1024/1024 [kB] (average 1000 MBps) 00:14:58.647 00:14:58.647 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:58.647 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:14:58.647 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:14:58.647 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:14:58.647 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:14:58.647 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:58.647 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:59.211 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:14:59.211 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:59.211 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:59.211 13:52:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:59.211 [2024-05-15 13:52:57.559044] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:14:59.211 [2024-05-15 13:52:57.559109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62042 ] 00:14:59.211 { 00:14:59.211 "subsystems": [ 00:14:59.211 { 00:14:59.211 "subsystem": "bdev", 00:14:59.211 "config": [ 00:14:59.211 { 00:14:59.211 "params": { 00:14:59.211 "trtype": "pcie", 00:14:59.211 "traddr": "0000:00:10.0", 00:14:59.211 "name": "Nvme0" 00:14:59.211 }, 00:14:59.211 "method": "bdev_nvme_attach_controller" 00:14:59.211 }, 00:14:59.211 { 00:14:59.211 "method": "bdev_wait_for_examine" 00:14:59.211 } 00:14:59.211 ] 00:14:59.211 } 00:14:59.211 ] 00:14:59.211 } 00:14:59.211 [2024-05-15 13:52:57.698998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.469 [2024-05-15 13:52:57.790457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.726  Copying: 56/56 [kB] (average 54 MBps) 00:14:59.726 00:14:59.726 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:14:59.726 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:59.726 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:59.726 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:59.726 [2024-05-15 13:52:58.183549] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:14:59.726 [2024-05-15 13:52:58.183618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62061 ] 00:14:59.726 { 00:14:59.726 "subsystems": [ 00:14:59.726 { 00:14:59.726 "subsystem": "bdev", 00:14:59.726 "config": [ 00:14:59.726 { 00:14:59.726 "params": { 00:14:59.726 "trtype": "pcie", 00:14:59.726 "traddr": "0000:00:10.0", 00:14:59.726 "name": "Nvme0" 00:14:59.726 }, 00:14:59.726 "method": "bdev_nvme_attach_controller" 00:14:59.726 }, 00:14:59.726 { 00:14:59.726 "method": "bdev_wait_for_examine" 00:14:59.726 } 00:14:59.726 ] 00:14:59.726 } 00:14:59.726 ] 00:14:59.726 } 00:14:59.983 [2024-05-15 13:52:58.324965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.983 [2024-05-15 13:52:58.414164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.240  Copying: 56/56 [kB] (average 54 MBps) 00:15:00.240 00:15:00.240 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:00.240 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:15:00.240 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:00.240 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:00.240 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:15:00.240 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:00.241 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:00.241 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:00.241 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:00.241 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:00.241 13:52:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:00.498 [2024-05-15 13:52:58.805023] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:00.498 [2024-05-15 13:52:58.805089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62076 ] 00:15:00.498 { 00:15:00.498 "subsystems": [ 00:15:00.498 { 00:15:00.498 "subsystem": "bdev", 00:15:00.498 "config": [ 00:15:00.498 { 00:15:00.498 "params": { 00:15:00.498 "trtype": "pcie", 00:15:00.498 "traddr": "0000:00:10.0", 00:15:00.498 "name": "Nvme0" 00:15:00.498 }, 00:15:00.498 "method": "bdev_nvme_attach_controller" 00:15:00.498 }, 00:15:00.498 { 00:15:00.498 "method": "bdev_wait_for_examine" 00:15:00.498 } 00:15:00.498 ] 00:15:00.498 } 00:15:00.498 ] 00:15:00.498 } 00:15:00.498 [2024-05-15 13:52:58.946049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.498 [2024-05-15 13:52:59.048680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.012  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:01.012 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:01.012 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:01.270 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:15:01.270 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:01.270 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:01.270 13:52:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:01.529 [2024-05-15 13:52:59.837690] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
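Every spdk_dd invocation in these passes carries --json /dev/fd/62; gen_conf writes the bdev configuration to that descriptor so the standalone spdk_dd app can attach the PCIe controller at 0000:00:10.0 as "Nvme0" and expose Nvme0n1 before copying. The config it passes, repeated verbatim in the dump that follows, is just:

  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ] }
    ]
  }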
00:15:01.529 { 00:15:01.529 "subsystems": [ 00:15:01.529 { 00:15:01.529 "subsystem": "bdev", 00:15:01.529 "config": [ 00:15:01.529 { 00:15:01.529 "params": { 00:15:01.529 "trtype": "pcie", 00:15:01.529 "traddr": "0000:00:10.0", 00:15:01.529 "name": "Nvme0" 00:15:01.529 }, 00:15:01.529 "method": "bdev_nvme_attach_controller" 00:15:01.529 }, 00:15:01.529 { 00:15:01.529 "method": "bdev_wait_for_examine" 00:15:01.529 } 00:15:01.529 ] 00:15:01.529 } 00:15:01.529 ] 00:15:01.529 } 00:15:01.529 [2024-05-15 13:52:59.837787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62095 ] 00:15:01.529 [2024-05-15 13:52:59.978676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.529 [2024-05-15 13:53:00.076788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.046  Copying: 48/48 [kB] (average 46 MBps) 00:15:02.046 00:15:02.046 13:53:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:15:02.046 13:53:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:02.046 13:53:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:02.046 13:53:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:02.046 [2024-05-15 13:53:00.448240] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:02.046 [2024-05-15 13:53:00.448304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62110 ] 00:15:02.046 { 00:15:02.046 "subsystems": [ 00:15:02.046 { 00:15:02.046 "subsystem": "bdev", 00:15:02.046 "config": [ 00:15:02.046 { 00:15:02.046 "params": { 00:15:02.046 "trtype": "pcie", 00:15:02.046 "traddr": "0000:00:10.0", 00:15:02.046 "name": "Nvme0" 00:15:02.046 }, 00:15:02.046 "method": "bdev_nvme_attach_controller" 00:15:02.046 }, 00:15:02.046 { 00:15:02.046 "method": "bdev_wait_for_examine" 00:15:02.046 } 00:15:02.046 ] 00:15:02.046 } 00:15:02.046 ] 00:15:02.046 } 00:15:02.046 [2024-05-15 13:53:00.588691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.305 [2024-05-15 13:53:00.682070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.564  Copying: 48/48 [kB] (average 46 MBps) 00:15:02.564 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:02.564 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:02.564 [2024-05-15 13:53:01.082048] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:02.564 [2024-05-15 13:53:01.082113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62130 ] 00:15:02.564 { 00:15:02.564 "subsystems": [ 00:15:02.564 { 00:15:02.564 "subsystem": "bdev", 00:15:02.564 "config": [ 00:15:02.564 { 00:15:02.564 "params": { 00:15:02.564 "trtype": "pcie", 00:15:02.564 "traddr": "0000:00:10.0", 00:15:02.564 "name": "Nvme0" 00:15:02.564 }, 00:15:02.564 "method": "bdev_nvme_attach_controller" 00:15:02.564 }, 00:15:02.564 { 00:15:02.564 "method": "bdev_wait_for_examine" 00:15:02.564 } 00:15:02.564 ] 00:15:02.564 } 00:15:02.564 ] 00:15:02.564 } 00:15:02.823 [2024-05-15 13:53:01.223584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.823 [2024-05-15 13:53:01.317207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.341  Copying: 1024/1024 [kB] (average 500 MBps) 00:15:03.341 00:15:03.341 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:03.341 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:15:03.341 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:15:03.341 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:15:03.341 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:15:03.341 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:03.341 13:53:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:03.600 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:15:03.600 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:03.600 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:03.600 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:03.600 { 00:15:03.600 "subsystems": [ 00:15:03.600 { 00:15:03.600 "subsystem": "bdev", 00:15:03.600 "config": [ 00:15:03.600 { 00:15:03.600 "params": { 00:15:03.600 "trtype": "pcie", 00:15:03.600 "traddr": "0000:00:10.0", 00:15:03.600 "name": "Nvme0" 00:15:03.600 }, 00:15:03.600 "method": "bdev_nvme_attach_controller" 00:15:03.600 }, 00:15:03.600 { 00:15:03.600 "method": "bdev_wait_for_examine" 00:15:03.600 } 00:15:03.600 ] 00:15:03.600 } 00:15:03.600 ] 00:15:03.600 } 00:15:03.600 [2024-05-15 13:53:02.128346] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:03.600 [2024-05-15 13:53:02.128414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62150 ] 00:15:03.858 [2024-05-15 13:53:02.270208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.858 [2024-05-15 13:53:02.363559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.374  Copying: 48/48 [kB] (average 46 MBps) 00:15:04.374 00:15:04.374 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:15:04.374 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:04.374 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:04.374 13:53:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:04.374 [2024-05-15 13:53:02.750022] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:04.374 [2024-05-15 13:53:02.750098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62158 ] 00:15:04.374 { 00:15:04.374 "subsystems": [ 00:15:04.374 { 00:15:04.374 "subsystem": "bdev", 00:15:04.374 "config": [ 00:15:04.374 { 00:15:04.374 "params": { 00:15:04.374 "trtype": "pcie", 00:15:04.374 "traddr": "0000:00:10.0", 00:15:04.374 "name": "Nvme0" 00:15:04.374 }, 00:15:04.374 "method": "bdev_nvme_attach_controller" 00:15:04.374 }, 00:15:04.374 { 00:15:04.375 "method": "bdev_wait_for_examine" 00:15:04.375 } 00:15:04.375 ] 00:15:04.375 } 00:15:04.375 ] 00:15:04.375 } 00:15:04.375 [2024-05-15 13:53:02.886599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.634 [2024-05-15 13:53:02.994990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.893  Copying: 48/48 [kB] (average 46 MBps) 00:15:04.893 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:04.893 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:04.893 [2024-05-15 13:53:03.401530] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 
23.11.0 initialization... 00:15:04.893 [2024-05-15 13:53:03.401611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62179 ] 00:15:04.893 { 00:15:04.893 "subsystems": [ 00:15:04.893 { 00:15:04.893 "subsystem": "bdev", 00:15:04.893 "config": [ 00:15:04.893 { 00:15:04.893 "params": { 00:15:04.893 "trtype": "pcie", 00:15:04.893 "traddr": "0000:00:10.0", 00:15:04.893 "name": "Nvme0" 00:15:04.893 }, 00:15:04.893 "method": "bdev_nvme_attach_controller" 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "bdev_wait_for_examine" 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 } 00:15:05.152 [2024-05-15 13:53:03.542473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.152 [2024-05-15 13:53:03.633174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.671  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:05.671 00:15:05.671 00:15:05.671 real 0m14.330s 00:15:05.671 user 0m10.462s 00:15:05.671 sys 0m4.917s 00:15:05.671 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:05.671 13:53:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:05.671 ************************************ 00:15:05.671 END TEST dd_rw 00:15:05.671 ************************************ 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:05.671 ************************************ 00:15:05.671 START TEST dd_rw_offset 00:15:05.671 ************************************ 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=dac2pdmc73a569ojpn0za6opog8suazs005v50d3eq0o8rv4e0i12zrnegkti8bq8jcjznjdrh7slvtt3mgab9sja86ecqpnd8qw3zimsajscckhd8dhekl5phne1jq0yvnjwpci2pkq79tp2zajhsdvubn7hrth7wegchrr3li061srysmpzji83t34f97umk446od2cjh3dupbu8gprplw2e5btdnm3pxyyo80jba2ypauap6x209kphsscvpq3lf9ti0izchbqd8mabe3a0f8j7dcuaxjm7485pd4trtdmunlxzj3cbgwg3kmnpxoky0rde6d65bt7gn1evuhjrqb5tf2wfgd0tpa6x9bmzakvzyyfffy1btuxdthe8cbq4ckzwmi694tnmqasetopag30h2nbp10mpr1a84ndb4lxuxh9qkpjddv01cqsu4oc6orblepp9qiv0p9ivwqft7qvmwiprjzk926dm3ftsuqdllbil767bq6ujml3sfozwic0ulfu338yllxjupd638crz812wvj387m3xuxbwbtqpqxw6yzcprk1g7rpr4dc9xbgurtxxgh65frosbefiilzfjcx5f9bl3nflmma9ir7dut59ylm289rokf3vj87tzonawvg5urxeia4d6hlgcfurh5uwg2xwib4b57bu4804yzqwsxqwxvk766k6fxnpbsklrdnv7o51mehlvk9f2vre1apcvlllt2lfiin1w0brfqhgcc8kg8y6fj3srlw08hiev7wh6ilx3zssjsj1yoflqo89l6ogc60v9oztckbo7nktchjteb9jb6527kgpisi19jgawqwafpk4z8au4icexs2vttinz03aij55np0vx1ae4ciykar6d1sa88ovubkn8l32lpnfi9w186uzvfzlkow7u804ua0c1ywfck2la4xe8f75g8htcl96n6y524fr4jxyzo1x3trvsnqnah4l6wb0u54fywzfkzhr21dcq4vtq47tostbnwuh3w2tm9jphsaxtxkbbgxykiedzfcbnmm9hy6l7nyf86yu9cz713cj1a9rr34wv9j3y8knn6vkvpigrl3gdqn6wbty705d1igfmzutgyf68squcd0ac57djea2ujxk10wqyjzahdosufbqs0u5p8ocyzd6inbibbnwo6fh82wwat1ec8dc1p9ijeytsh2nhkaup3s2tp7jih9av8501ij3hwrr708tel91izcvcxo620zrj8unft9pfn3xpefn0r7un91zp2r19lqy1xwylyll2aryhxzetk1y85xj7fv5wv55kn13h1gf3kc0qjjy7y7tmdf5mkd1k9xdq1eak6wui8pqfqtenbahg17khneah6yanv4geh9pn6c7qazpnebx2szt431uxx6s8ltt5ia0avcxz00j4p2nhdvmu4qqfgid94ucejejlgiglfc1ptynhppmrsqri0lkekaztsghs85c7vgyb5mblab8hh7z18ys143422j7s11e4tc0fh1g4nfevaduuoohwegedbwgmw3y818f3hdemrbiogjitq27fwoijmf5q18ur0cy2a6u2012cn8nxdrglvz8o301iyexggal44b9whpfauirotrjo9036sdt7chxssgphpnac2gj344ka10jtirozmibhszpka9cv65aezy5niytd452owiv0073qlx53blxuojiqmnur6acpens73swb3d3a5lcr329gzs3xdi4kl2113pd2018h59zxr87f50yci0g2dp1wajinggapasuzvxf9ca52b7a8ild0sgvgqj8b17jxqquaxzz0qkm1ju3y3eifhq7nyhnmsgplrm3y44utxf9jgk9r37y0ky8t1ogzxma4jxtpbz3rc2qldmflnwi90ed2ou6h89evnd7lg6nqeyr9vyfimiv7kgfbzqe6md6qsnnn9wrthkzonuvzs698laubqbmpqpqgjlg3z2lq3pq2rt2fenf9lcacdbb9qu3ooijtw6w41rwig84ghljmzdbsg1gqfweytjw9yg3p8d36jbblxdrx09cgxnht6h12y9zjx1w26nt8sewbf25jn3u30nvbj5ubxnmhxctig7j9wak1xdpke52pjxtznseoz9j1vuzs0zijhslu4o2ywzp4l425sbozh5ugngk9c5vi1w6krqxzyehnhzrvz1t39fq315rkxhybkxerty7896x0j5ohwmrij8gh99y5x9d5l4nbs2tbmmw2wr8bgagbdbgzw6ho3mx5kbjrcfozm6jg92wfk1flltcesultmb0j64gzoon3k32nbvhlub7tp306rc3m5bsjarzw7nmq1jtoz72ximx45lka2ek9a7fp80lz01r5orn1ef1qfgvi9lvuy8uviqsxf2ibgcxffhp79urcee8jna1va8j8s2dow1nochhjdkur918vasrf4qde2r8ec658xdek4d5oxj6ywqcy0dp813vr6dk14djvl7vj9n6cgkaeymxnz1v8qb1geckysf1urn2i3ds843p970frynwk213b0oqgowzuqlkjh6s7eqx997ivq7kaadg7uv1568oboxb6qunqq0y7k3as11fjrxdtqv0vhw8k2vd45le4zoket4btziytqbm4lpdvytf2t2fy9eyzfq4jvxhnvhe5frsy6swgmzoux1dqokeo33kw7gm491d51px80bpt9gj6m4a6yhuqb38baqno6nv3l0i6wldxuoy1ig46zts349pmewe870qexy6ziycomw3xlglt9qojcg6zz3vk9muugcdoc7c2c4gyrxz5q50g7gge75wr9rrmwt1txrarm83ancq2c0dxjgza0fwstqv4cnesah9mdpismuz2d1ekyimpyrn9ue8sysnyjwqi61f8swk4me8jb393y26xxz84hzd8ncgadggqvq4j3hbb3wopcgjtrtto7d23yc35p1ynlpitpseekoovls1d0mvtd3h788vvoj50yyoiamairyeq12zmjviwjzslgeuwiemy686apq32l84it0ensmqe8pv0jsd3z2dya7lhhrm5szr8pbo2d4lfykp8hdmjaxfjvmlidxr01bmiigwa5p2azxzbtk4b00wq134jjkf5kwm9mhtd69oa7a0uivsbmqhtwdcyezsv88vsz64kv7r9n66aa4uwtu1gixgnqketg4f5lwlk3f83s1fjay6fb0cmlajy88hp4rwbiw0i8vz3vvbin6o5p46s3toyzklysrn5zgylg05hnrar70v5tphsltnashutytemu8dsyp4q9y9dsvxtkrlilyxhirug6i9muz5bmekw57ikaeuz3380rcc5g8t7q1v4leocs7w8kwkki264ddm5iewrvmg8bwt905eg86pyir94eh8tz3jd8zl5cbuajclim5h6hjv8j3ftd21swfkb9gibab5u3z4grgwv2zhe5aa6h9uuuduyftcguiyiy24h1w5jifsiehqlqavd7qvrrh245p6
0kq72l6adtw5jkczlznu6ueuv3glf7gzguv3k13hsqy881g9ztd6yy442r6gz25rz17gxm203enfw4qr6xgibc1yv9wfljokwv7ktumf50xz8dv5sal9a3okqpedpxk33y8y7t5ifl77rm8mfc9u4cw10uldi4ki0x7ekp8b5ufxdsx233qcvxfwzkce174hipzn5hn7xsoa8mdwsk9a2pahhjq1fxcjap8cydvnz0s08hrhcgw6nr1hvg674mnz4l5cda853uc2nfmdxj9951lzvopvfpn644rzfakik4edb6h2223a5dsukx4wb1lyhk0ro63dm64nt8l9zvf2rcbebs05hgrnjhaebq5qnpkv8aeqkoaj9xjsr6fhdeqzwzhs9xubrig7ucnx2nvzpzyduh7d8ebl4un3marvovkg4zsnki7ga4a77iz2yurqgzdrhqpo0gtcybv6q8spto31rs5lc7jxe4zkgd16ge24q05fu09pj85fi4j6ffy1yhevpvboteswqd7akq486vh5lug1rs3fj3 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:15:05.671 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:05.671 [2024-05-15 13:53:04.134631] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:05.671 [2024-05-15 13:53:04.134712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62208 ] 00:15:05.671 { 00:15:05.671 "subsystems": [ 00:15:05.671 { 00:15:05.671 "subsystem": "bdev", 00:15:05.671 "config": [ 00:15:05.671 { 00:15:05.671 "params": { 00:15:05.671 "trtype": "pcie", 00:15:05.671 "traddr": "0000:00:10.0", 00:15:05.671 "name": "Nvme0" 00:15:05.671 }, 00:15:05.671 "method": "bdev_nvme_attach_controller" 00:15:05.671 }, 00:15:05.671 { 00:15:05.671 "method": "bdev_wait_for_examine" 00:15:05.671 } 00:15:05.671 ] 00:15:05.671 } 00:15:05.671 ] 00:15:05.671 } 00:15:05.930 [2024-05-15 13:53:04.280152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.930 [2024-05-15 13:53:04.370528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.189  Copying: 4096/4096 [B] (average 4000 kBps) 00:15:06.189 00:15:06.189 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:15:06.189 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:15:06.189 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:15:06.189 13:53:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:06.448 [2024-05-15 13:53:04.761358] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:06.448 [2024-05-15 13:53:04.761826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62223 ] 00:15:06.448 { 00:15:06.448 "subsystems": [ 00:15:06.448 { 00:15:06.448 "subsystem": "bdev", 00:15:06.448 "config": [ 00:15:06.448 { 00:15:06.448 "params": { 00:15:06.448 "trtype": "pcie", 00:15:06.448 "traddr": "0000:00:10.0", 00:15:06.448 "name": "Nvme0" 00:15:06.448 }, 00:15:06.448 "method": "bdev_nvme_attach_controller" 00:15:06.448 }, 00:15:06.448 { 00:15:06.448 "method": "bdev_wait_for_examine" 00:15:06.448 } 00:15:06.448 ] 00:15:06.448 } 00:15:06.448 ] 00:15:06.448 } 00:15:06.448 [2024-05-15 13:53:04.902770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.448 [2024-05-15 13:53:05.001475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.967  Copying: 4096/4096 [B] (average 4000 kBps) 00:15:06.967 00:15:06.967 13:53:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ dac2pdmc73a569ojpn0za6opog8suazs005v50d3eq0o8rv4e0i12zrnegkti8bq8jcjznjdrh7slvtt3mgab9sja86ecqpnd8qw3zimsajscckhd8dhekl5phne1jq0yvnjwpci2pkq79tp2zajhsdvubn7hrth7wegchrr3li061srysmpzji83t34f97umk446od2cjh3dupbu8gprplw2e5btdnm3pxyyo80jba2ypauap6x209kphsscvpq3lf9ti0izchbqd8mabe3a0f8j7dcuaxjm7485pd4trtdmunlxzj3cbgwg3kmnpxoky0rde6d65bt7gn1evuhjrqb5tf2wfgd0tpa6x9bmzakvzyyfffy1btuxdthe8cbq4ckzwmi694tnmqasetopag30h2nbp10mpr1a84ndb4lxuxh9qkpjddv01cqsu4oc6orblepp9qiv0p9ivwqft7qvmwiprjzk926dm3ftsuqdllbil767bq6ujml3sfozwic0ulfu338yllxjupd638crz812wvj387m3xuxbwbtqpqxw6yzcprk1g7rpr4dc9xbgurtxxgh65frosbefiilzfjcx5f9bl3nflmma9ir7dut59ylm289rokf3vj87tzonawvg5urxeia4d6hlgcfurh5uwg2xwib4b57bu4804yzqwsxqwxvk766k6fxnpbsklrdnv7o51mehlvk9f2vre1apcvlllt2lfiin1w0brfqhgcc8kg8y6fj3srlw08hiev7wh6ilx3zssjsj1yoflqo89l6ogc60v9oztckbo7nktchjteb9jb6527kgpisi19jgawqwafpk4z8au4icexs2vttinz03aij55np0vx1ae4ciykar6d1sa88ovubkn8l32lpnfi9w186uzvfzlkow7u804ua0c1ywfck2la4xe8f75g8htcl96n6y524fr4jxyzo1x3trvsnqnah4l6wb0u54fywzfkzhr21dcq4vtq47tostbnwuh3w2tm9jphsaxtxkbbgxykiedzfcbnmm9hy6l7nyf86yu9cz713cj1a9rr34wv9j3y8knn6vkvpigrl3gdqn6wbty705d1igfmzutgyf68squcd0ac57djea2ujxk10wqyjzahdosufbqs0u5p8ocyzd6inbibbnwo6fh82wwat1ec8dc1p9ijeytsh2nhkaup3s2tp7jih9av8501ij3hwrr708tel91izcvcxo620zrj8unft9pfn3xpefn0r7un91zp2r19lqy1xwylyll2aryhxzetk1y85xj7fv5wv55kn13h1gf3kc0qjjy7y7tmdf5mkd1k9xdq1eak6wui8pqfqtenbahg17khneah6yanv4geh9pn6c7qazpnebx2szt431uxx6s8ltt5ia0avcxz00j4p2nhdvmu4qqfgid94ucejejlgiglfc1ptynhppmrsqri0lkekaztsghs85c7vgyb5mblab8hh7z18ys143422j7s11e4tc0fh1g4nfevaduuoohwegedbwgmw3y818f3hdemrbiogjitq27fwoijmf5q18ur0cy2a6u2012cn8nxdrglvz8o301iyexggal44b9whpfauirotrjo9036sdt7chxssgphpnac2gj344ka10jtirozmibhszpka9cv65aezy5niytd452owiv0073qlx53blxuojiqmnur6acpens73swb3d3a5lcr329gzs3xdi4kl2113pd2018h59zxr87f50yci0g2dp1wajinggapasuzvxf9ca52b7a8ild0sgvgqj8b17jxqquaxzz0qkm1ju3y3eifhq7nyhnmsgplrm3y44utxf9jgk9r37y0ky8t1ogzxma4jxtpbz3rc2qldmflnwi90ed2ou6h89evnd7lg6nqeyr9vyfimiv7kgfbzqe6md6qsnnn9wrthkzonuvzs698laubqbmpqpqgjlg3z2lq3pq2rt2fenf9lcacdbb9qu3ooijtw6w41rwig84ghljmzdbsg1gqfweytjw9yg3p8d36jbblxdrx09cgxnht6h12y9zjx1w26nt8sewbf25jn3u30nvbj5ubxnmhxctig7j9wak1xdpke52pjxtznseoz9j1vuzs0zijhslu4o2ywzp4l425sbozh5ugngk9c5vi1w6krqxzyehnhzrvz1t39fq315rkxhybkxerty7896x0j5ohwmrij8gh99y5x9d5l4nbs2tbmmw2wr8bgagbdbgzw6ho3mx5kbjrcfozm6jg92w
fk1flltcesultmb0j64gzoon3k32nbvhlub7tp306rc3m5bsjarzw7nmq1jtoz72ximx45lka2ek9a7fp80lz01r5orn1ef1qfgvi9lvuy8uviqsxf2ibgcxffhp79urcee8jna1va8j8s2dow1nochhjdkur918vasrf4qde2r8ec658xdek4d5oxj6ywqcy0dp813vr6dk14djvl7vj9n6cgkaeymxnz1v8qb1geckysf1urn2i3ds843p970frynwk213b0oqgowzuqlkjh6s7eqx997ivq7kaadg7uv1568oboxb6qunqq0y7k3as11fjrxdtqv0vhw8k2vd45le4zoket4btziytqbm4lpdvytf2t2fy9eyzfq4jvxhnvhe5frsy6swgmzoux1dqokeo33kw7gm491d51px80bpt9gj6m4a6yhuqb38baqno6nv3l0i6wldxuoy1ig46zts349pmewe870qexy6ziycomw3xlglt9qojcg6zz3vk9muugcdoc7c2c4gyrxz5q50g7gge75wr9rrmwt1txrarm83ancq2c0dxjgza0fwstqv4cnesah9mdpismuz2d1ekyimpyrn9ue8sysnyjwqi61f8swk4me8jb393y26xxz84hzd8ncgadggqvq4j3hbb3wopcgjtrtto7d23yc35p1ynlpitpseekoovls1d0mvtd3h788vvoj50yyoiamairyeq12zmjviwjzslgeuwiemy686apq32l84it0ensmqe8pv0jsd3z2dya7lhhrm5szr8pbo2d4lfykp8hdmjaxfjvmlidxr01bmiigwa5p2azxzbtk4b00wq134jjkf5kwm9mhtd69oa7a0uivsbmqhtwdcyezsv88vsz64kv7r9n66aa4uwtu1gixgnqketg4f5lwlk3f83s1fjay6fb0cmlajy88hp4rwbiw0i8vz3vvbin6o5p46s3toyzklysrn5zgylg05hnrar70v5tphsltnashutytemu8dsyp4q9y9dsvxtkrlilyxhirug6i9muz5bmekw57ikaeuz3380rcc5g8t7q1v4leocs7w8kwkki264ddm5iewrvmg8bwt905eg86pyir94eh8tz3jd8zl5cbuajclim5h6hjv8j3ftd21swfkb9gibab5u3z4grgwv2zhe5aa6h9uuuduyftcguiyiy24h1w5jifsiehqlqavd7qvrrh245p60kq72l6adtw5jkczlznu6ueuv3glf7gzguv3k13hsqy881g9ztd6yy442r6gz25rz17gxm203enfw4qr6xgibc1yv9wfljokwv7ktumf50xz8dv5sal9a3okqpedpxk33y8y7t5ifl77rm8mfc9u4cw10uldi4ki0x7ekp8b5ufxdsx233qcvxfwzkce174hipzn5hn7xsoa8mdwsk9a2pahhjq1fxcjap8cydvnz0s08hrhcgw6nr1hvg674mnz4l5cda853uc2nfmdxj9951lzvopvfpn644rzfakik4edb6h2223a5dsukx4wb1lyhk0ro63dm64nt8l9zvf2rcbebs05hgrnjhaebq5qnpkv8aeqkoaj9xjsr6fhdeqzwzhs9xubrig7ucnx2nvzpzyduh7d8ebl4un3marvovkg4zsnki7ga4a77iz2yurqgzdrhqpo0gtcybv6q8spto31rs5lc7jxe4zkgd16ge24q05fu09pj85fi4j6ffy1yhevpvboteswqd7akq486vh5lug1rs3fj3 == 
\d\a\c\2\p\d\m\c\7\3\a\5\6\9\o\j\p\n\0\z\a\6\o\p\o\g\8\s\u\a\z\s\0\0\5\v\5\0\d\3\e\q\0\o\8\r\v\4\e\0\i\1\2\z\r\n\e\g\k\t\i\8\b\q\8\j\c\j\z\n\j\d\r\h\7\s\l\v\t\t\3\m\g\a\b\9\s\j\a\8\6\e\c\q\p\n\d\8\q\w\3\z\i\m\s\a\j\s\c\c\k\h\d\8\d\h\e\k\l\5\p\h\n\e\1\j\q\0\y\v\n\j\w\p\c\i\2\p\k\q\7\9\t\p\2\z\a\j\h\s\d\v\u\b\n\7\h\r\t\h\7\w\e\g\c\h\r\r\3\l\i\0\6\1\s\r\y\s\m\p\z\j\i\8\3\t\3\4\f\9\7\u\m\k\4\4\6\o\d\2\c\j\h\3\d\u\p\b\u\8\g\p\r\p\l\w\2\e\5\b\t\d\n\m\3\p\x\y\y\o\8\0\j\b\a\2\y\p\a\u\a\p\6\x\2\0\9\k\p\h\s\s\c\v\p\q\3\l\f\9\t\i\0\i\z\c\h\b\q\d\8\m\a\b\e\3\a\0\f\8\j\7\d\c\u\a\x\j\m\7\4\8\5\p\d\4\t\r\t\d\m\u\n\l\x\z\j\3\c\b\g\w\g\3\k\m\n\p\x\o\k\y\0\r\d\e\6\d\6\5\b\t\7\g\n\1\e\v\u\h\j\r\q\b\5\t\f\2\w\f\g\d\0\t\p\a\6\x\9\b\m\z\a\k\v\z\y\y\f\f\f\y\1\b\t\u\x\d\t\h\e\8\c\b\q\4\c\k\z\w\m\i\6\9\4\t\n\m\q\a\s\e\t\o\p\a\g\3\0\h\2\n\b\p\1\0\m\p\r\1\a\8\4\n\d\b\4\l\x\u\x\h\9\q\k\p\j\d\d\v\0\1\c\q\s\u\4\o\c\6\o\r\b\l\e\p\p\9\q\i\v\0\p\9\i\v\w\q\f\t\7\q\v\m\w\i\p\r\j\z\k\9\2\6\d\m\3\f\t\s\u\q\d\l\l\b\i\l\7\6\7\b\q\6\u\j\m\l\3\s\f\o\z\w\i\c\0\u\l\f\u\3\3\8\y\l\l\x\j\u\p\d\6\3\8\c\r\z\8\1\2\w\v\j\3\8\7\m\3\x\u\x\b\w\b\t\q\p\q\x\w\6\y\z\c\p\r\k\1\g\7\r\p\r\4\d\c\9\x\b\g\u\r\t\x\x\g\h\6\5\f\r\o\s\b\e\f\i\i\l\z\f\j\c\x\5\f\9\b\l\3\n\f\l\m\m\a\9\i\r\7\d\u\t\5\9\y\l\m\2\8\9\r\o\k\f\3\v\j\8\7\t\z\o\n\a\w\v\g\5\u\r\x\e\i\a\4\d\6\h\l\g\c\f\u\r\h\5\u\w\g\2\x\w\i\b\4\b\5\7\b\u\4\8\0\4\y\z\q\w\s\x\q\w\x\v\k\7\6\6\k\6\f\x\n\p\b\s\k\l\r\d\n\v\7\o\5\1\m\e\h\l\v\k\9\f\2\v\r\e\1\a\p\c\v\l\l\l\t\2\l\f\i\i\n\1\w\0\b\r\f\q\h\g\c\c\8\k\g\8\y\6\f\j\3\s\r\l\w\0\8\h\i\e\v\7\w\h\6\i\l\x\3\z\s\s\j\s\j\1\y\o\f\l\q\o\8\9\l\6\o\g\c\6\0\v\9\o\z\t\c\k\b\o\7\n\k\t\c\h\j\t\e\b\9\j\b\6\5\2\7\k\g\p\i\s\i\1\9\j\g\a\w\q\w\a\f\p\k\4\z\8\a\u\4\i\c\e\x\s\2\v\t\t\i\n\z\0\3\a\i\j\5\5\n\p\0\v\x\1\a\e\4\c\i\y\k\a\r\6\d\1\s\a\8\8\o\v\u\b\k\n\8\l\3\2\l\p\n\f\i\9\w\1\8\6\u\z\v\f\z\l\k\o\w\7\u\8\0\4\u\a\0\c\1\y\w\f\c\k\2\l\a\4\x\e\8\f\7\5\g\8\h\t\c\l\9\6\n\6\y\5\2\4\f\r\4\j\x\y\z\o\1\x\3\t\r\v\s\n\q\n\a\h\4\l\6\w\b\0\u\5\4\f\y\w\z\f\k\z\h\r\2\1\d\c\q\4\v\t\q\4\7\t\o\s\t\b\n\w\u\h\3\w\2\t\m\9\j\p\h\s\a\x\t\x\k\b\b\g\x\y\k\i\e\d\z\f\c\b\n\m\m\9\h\y\6\l\7\n\y\f\8\6\y\u\9\c\z\7\1\3\c\j\1\a\9\r\r\3\4\w\v\9\j\3\y\8\k\n\n\6\v\k\v\p\i\g\r\l\3\g\d\q\n\6\w\b\t\y\7\0\5\d\1\i\g\f\m\z\u\t\g\y\f\6\8\s\q\u\c\d\0\a\c\5\7\d\j\e\a\2\u\j\x\k\1\0\w\q\y\j\z\a\h\d\o\s\u\f\b\q\s\0\u\5\p\8\o\c\y\z\d\6\i\n\b\i\b\b\n\w\o\6\f\h\8\2\w\w\a\t\1\e\c\8\d\c\1\p\9\i\j\e\y\t\s\h\2\n\h\k\a\u\p\3\s\2\t\p\7\j\i\h\9\a\v\8\5\0\1\i\j\3\h\w\r\r\7\0\8\t\e\l\9\1\i\z\c\v\c\x\o\6\2\0\z\r\j\8\u\n\f\t\9\p\f\n\3\x\p\e\f\n\0\r\7\u\n\9\1\z\p\2\r\1\9\l\q\y\1\x\w\y\l\y\l\l\2\a\r\y\h\x\z\e\t\k\1\y\8\5\x\j\7\f\v\5\w\v\5\5\k\n\1\3\h\1\g\f\3\k\c\0\q\j\j\y\7\y\7\t\m\d\f\5\m\k\d\1\k\9\x\d\q\1\e\a\k\6\w\u\i\8\p\q\f\q\t\e\n\b\a\h\g\1\7\k\h\n\e\a\h\6\y\a\n\v\4\g\e\h\9\p\n\6\c\7\q\a\z\p\n\e\b\x\2\s\z\t\4\3\1\u\x\x\6\s\8\l\t\t\5\i\a\0\a\v\c\x\z\0\0\j\4\p\2\n\h\d\v\m\u\4\q\q\f\g\i\d\9\4\u\c\e\j\e\j\l\g\i\g\l\f\c\1\p\t\y\n\h\p\p\m\r\s\q\r\i\0\l\k\e\k\a\z\t\s\g\h\s\8\5\c\7\v\g\y\b\5\m\b\l\a\b\8\h\h\7\z\1\8\y\s\1\4\3\4\2\2\j\7\s\1\1\e\4\t\c\0\f\h\1\g\4\n\f\e\v\a\d\u\u\o\o\h\w\e\g\e\d\b\w\g\m\w\3\y\8\1\8\f\3\h\d\e\m\r\b\i\o\g\j\i\t\q\2\7\f\w\o\i\j\m\f\5\q\1\8\u\r\0\c\y\2\a\6\u\2\0\1\2\c\n\8\n\x\d\r\g\l\v\z\8\o\3\0\1\i\y\e\x\g\g\a\l\4\4\b\9\w\h\p\f\a\u\i\r\o\t\r\j\o\9\0\3\6\s\d\t\7\c\h\x\s\s\g\p\h\p\n\a\c\2\g\j\3\4\4\k\a\1\0\j\t\i\r\o\z\m\i\b\h\s\z\p\k\a\9\c\v\6\5\a\e\z\y\5\n\i\y\t\d\4\5\2\o\w\i\v\0\0\7\3\q\l\x\5\3\b\l\x\u\o\j\i\q\m\n\u\r\6\a\c\p\e\n\s\7\3\s\w\b\3\d\3\a\5\l\c\r\3\2\9\g\z\s\3\x\d\i\4\k\l\2\1\1\3\p\d\2\0\1\8\h\5\9\
z\x\r\8\7\f\5\0\y\c\i\0\g\2\d\p\1\w\a\j\i\n\g\g\a\p\a\s\u\z\v\x\f\9\c\a\5\2\b\7\a\8\i\l\d\0\s\g\v\g\q\j\8\b\1\7\j\x\q\q\u\a\x\z\z\0\q\k\m\1\j\u\3\y\3\e\i\f\h\q\7\n\y\h\n\m\s\g\p\l\r\m\3\y\4\4\u\t\x\f\9\j\g\k\9\r\3\7\y\0\k\y\8\t\1\o\g\z\x\m\a\4\j\x\t\p\b\z\3\r\c\2\q\l\d\m\f\l\n\w\i\9\0\e\d\2\o\u\6\h\8\9\e\v\n\d\7\l\g\6\n\q\e\y\r\9\v\y\f\i\m\i\v\7\k\g\f\b\z\q\e\6\m\d\6\q\s\n\n\n\9\w\r\t\h\k\z\o\n\u\v\z\s\6\9\8\l\a\u\b\q\b\m\p\q\p\q\g\j\l\g\3\z\2\l\q\3\p\q\2\r\t\2\f\e\n\f\9\l\c\a\c\d\b\b\9\q\u\3\o\o\i\j\t\w\6\w\4\1\r\w\i\g\8\4\g\h\l\j\m\z\d\b\s\g\1\g\q\f\w\e\y\t\j\w\9\y\g\3\p\8\d\3\6\j\b\b\l\x\d\r\x\0\9\c\g\x\n\h\t\6\h\1\2\y\9\z\j\x\1\w\2\6\n\t\8\s\e\w\b\f\2\5\j\n\3\u\3\0\n\v\b\j\5\u\b\x\n\m\h\x\c\t\i\g\7\j\9\w\a\k\1\x\d\p\k\e\5\2\p\j\x\t\z\n\s\e\o\z\9\j\1\v\u\z\s\0\z\i\j\h\s\l\u\4\o\2\y\w\z\p\4\l\4\2\5\s\b\o\z\h\5\u\g\n\g\k\9\c\5\v\i\1\w\6\k\r\q\x\z\y\e\h\n\h\z\r\v\z\1\t\3\9\f\q\3\1\5\r\k\x\h\y\b\k\x\e\r\t\y\7\8\9\6\x\0\j\5\o\h\w\m\r\i\j\8\g\h\9\9\y\5\x\9\d\5\l\4\n\b\s\2\t\b\m\m\w\2\w\r\8\b\g\a\g\b\d\b\g\z\w\6\h\o\3\m\x\5\k\b\j\r\c\f\o\z\m\6\j\g\9\2\w\f\k\1\f\l\l\t\c\e\s\u\l\t\m\b\0\j\6\4\g\z\o\o\n\3\k\3\2\n\b\v\h\l\u\b\7\t\p\3\0\6\r\c\3\m\5\b\s\j\a\r\z\w\7\n\m\q\1\j\t\o\z\7\2\x\i\m\x\4\5\l\k\a\2\e\k\9\a\7\f\p\8\0\l\z\0\1\r\5\o\r\n\1\e\f\1\q\f\g\v\i\9\l\v\u\y\8\u\v\i\q\s\x\f\2\i\b\g\c\x\f\f\h\p\7\9\u\r\c\e\e\8\j\n\a\1\v\a\8\j\8\s\2\d\o\w\1\n\o\c\h\h\j\d\k\u\r\9\1\8\v\a\s\r\f\4\q\d\e\2\r\8\e\c\6\5\8\x\d\e\k\4\d\5\o\x\j\6\y\w\q\c\y\0\d\p\8\1\3\v\r\6\d\k\1\4\d\j\v\l\7\v\j\9\n\6\c\g\k\a\e\y\m\x\n\z\1\v\8\q\b\1\g\e\c\k\y\s\f\1\u\r\n\2\i\3\d\s\8\4\3\p\9\7\0\f\r\y\n\w\k\2\1\3\b\0\o\q\g\o\w\z\u\q\l\k\j\h\6\s\7\e\q\x\9\9\7\i\v\q\7\k\a\a\d\g\7\u\v\1\5\6\8\o\b\o\x\b\6\q\u\n\q\q\0\y\7\k\3\a\s\1\1\f\j\r\x\d\t\q\v\0\v\h\w\8\k\2\v\d\4\5\l\e\4\z\o\k\e\t\4\b\t\z\i\y\t\q\b\m\4\l\p\d\v\y\t\f\2\t\2\f\y\9\e\y\z\f\q\4\j\v\x\h\n\v\h\e\5\f\r\s\y\6\s\w\g\m\z\o\u\x\1\d\q\o\k\e\o\3\3\k\w\7\g\m\4\9\1\d\5\1\p\x\8\0\b\p\t\9\g\j\6\m\4\a\6\y\h\u\q\b\3\8\b\a\q\n\o\6\n\v\3\l\0\i\6\w\l\d\x\u\o\y\1\i\g\4\6\z\t\s\3\4\9\p\m\e\w\e\8\7\0\q\e\x\y\6\z\i\y\c\o\m\w\3\x\l\g\l\t\9\q\o\j\c\g\6\z\z\3\v\k\9\m\u\u\g\c\d\o\c\7\c\2\c\4\g\y\r\x\z\5\q\5\0\g\7\g\g\e\7\5\w\r\9\r\r\m\w\t\1\t\x\r\a\r\m\8\3\a\n\c\q\2\c\0\d\x\j\g\z\a\0\f\w\s\t\q\v\4\c\n\e\s\a\h\9\m\d\p\i\s\m\u\z\2\d\1\e\k\y\i\m\p\y\r\n\9\u\e\8\s\y\s\n\y\j\w\q\i\6\1\f\8\s\w\k\4\m\e\8\j\b\3\9\3\y\2\6\x\x\z\8\4\h\z\d\8\n\c\g\a\d\g\g\q\v\q\4\j\3\h\b\b\3\w\o\p\c\g\j\t\r\t\t\o\7\d\2\3\y\c\3\5\p\1\y\n\l\p\i\t\p\s\e\e\k\o\o\v\l\s\1\d\0\m\v\t\d\3\h\7\8\8\v\v\o\j\5\0\y\y\o\i\a\m\a\i\r\y\e\q\1\2\z\m\j\v\i\w\j\z\s\l\g\e\u\w\i\e\m\y\6\8\6\a\p\q\3\2\l\8\4\i\t\0\e\n\s\m\q\e\8\p\v\0\j\s\d\3\z\2\d\y\a\7\l\h\h\r\m\5\s\z\r\8\p\b\o\2\d\4\l\f\y\k\p\8\h\d\m\j\a\x\f\j\v\m\l\i\d\x\r\0\1\b\m\i\i\g\w\a\5\p\2\a\z\x\z\b\t\k\4\b\0\0\w\q\1\3\4\j\j\k\f\5\k\w\m\9\m\h\t\d\6\9\o\a\7\a\0\u\i\v\s\b\m\q\h\t\w\d\c\y\e\z\s\v\8\8\v\s\z\6\4\k\v\7\r\9\n\6\6\a\a\4\u\w\t\u\1\g\i\x\g\n\q\k\e\t\g\4\f\5\l\w\l\k\3\f\8\3\s\1\f\j\a\y\6\f\b\0\c\m\l\a\j\y\8\8\h\p\4\r\w\b\i\w\0\i\8\v\z\3\v\v\b\i\n\6\o\5\p\4\6\s\3\t\o\y\z\k\l\y\s\r\n\5\z\g\y\l\g\0\5\h\n\r\a\r\7\0\v\5\t\p\h\s\l\t\n\a\s\h\u\t\y\t\e\m\u\8\d\s\y\p\4\q\9\y\9\d\s\v\x\t\k\r\l\i\l\y\x\h\i\r\u\g\6\i\9\m\u\z\5\b\m\e\k\w\5\7\i\k\a\e\u\z\3\3\8\0\r\c\c\5\g\8\t\7\q\1\v\4\l\e\o\c\s\7\w\8\k\w\k\k\i\2\6\4\d\d\m\5\i\e\w\r\v\m\g\8\b\w\t\9\0\5\e\g\8\6\p\y\i\r\9\4\e\h\8\t\z\3\j\d\8\z\l\5\c\b\u\a\j\c\l\i\m\5\h\6\h\j\v\8\j\3\f\t\d\2\1\s\w\f\k\b\9\g\i\b\a\b\5\u\3\z\4\g\r\g\w\v\2\z\h\e\5\a\a\6\h\9\u\u\u\d\u\y\f\t\c\g\u\i\y\i\y\2\4\h\1\w\5\j\i\f\s\i\e\h\q\l\q\a\v\d\7\q\v\r\r\h\2\4\5\p\6\0\k\q\7\2
\l\6\a\d\t\w\5\j\k\c\z\l\z\n\u\6\u\e\u\v\3\g\l\f\7\g\z\g\u\v\3\k\1\3\h\s\q\y\8\8\1\g\9\z\t\d\6\y\y\4\4\2\r\6\g\z\2\5\r\z\1\7\g\x\m\2\0\3\e\n\f\w\4\q\r\6\x\g\i\b\c\1\y\v\9\w\f\l\j\o\k\w\v\7\k\t\u\m\f\5\0\x\z\8\d\v\5\s\a\l\9\a\3\o\k\q\p\e\d\p\x\k\3\3\y\8\y\7\t\5\i\f\l\7\7\r\m\8\m\f\c\9\u\4\c\w\1\0\u\l\d\i\4\k\i\0\x\7\e\k\p\8\b\5\u\f\x\d\s\x\2\3\3\q\c\v\x\f\w\z\k\c\e\1\7\4\h\i\p\z\n\5\h\n\7\x\s\o\a\8\m\d\w\s\k\9\a\2\p\a\h\h\j\q\1\f\x\c\j\a\p\8\c\y\d\v\n\z\0\s\0\8\h\r\h\c\g\w\6\n\r\1\h\v\g\6\7\4\m\n\z\4\l\5\c\d\a\8\5\3\u\c\2\n\f\m\d\x\j\9\9\5\1\l\z\v\o\p\v\f\p\n\6\4\4\r\z\f\a\k\i\k\4\e\d\b\6\h\2\2\2\3\a\5\d\s\u\k\x\4\w\b\1\l\y\h\k\0\r\o\6\3\d\m\6\4\n\t\8\l\9\z\v\f\2\r\c\b\e\b\s\0\5\h\g\r\n\j\h\a\e\b\q\5\q\n\p\k\v\8\a\e\q\k\o\a\j\9\x\j\s\r\6\f\h\d\e\q\z\w\z\h\s\9\x\u\b\r\i\g\7\u\c\n\x\2\n\v\z\p\z\y\d\u\h\7\d\8\e\b\l\4\u\n\3\m\a\r\v\o\v\k\g\4\z\s\n\k\i\7\g\a\4\a\7\7\i\z\2\y\u\r\q\g\z\d\r\h\q\p\o\0\g\t\c\y\b\v\6\q\8\s\p\t\o\3\1\r\s\5\l\c\7\j\x\e\4\z\k\g\d\1\6\g\e\2\4\q\0\5\f\u\0\9\p\j\8\5\f\i\4\j\6\f\f\y\1\y\h\e\v\p\v\b\o\t\e\s\w\q\d\7\a\k\q\4\8\6\v\h\5\l\u\g\1\r\s\3\f\j\3 ]] 00:15:06.968 ************************************ 00:15:06.968 END TEST dd_rw_offset 00:15:06.968 ************************************ 00:15:06.968 00:15:06.968 real 0m1.299s 00:15:06.968 user 0m0.936s 00:15:06.968 sys 0m0.503s 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:06.968 13:53:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:06.968 [2024-05-15 13:53:05.445641] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:06.968 [2024-05-15 13:53:05.445731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62258 ] 00:15:06.968 { 00:15:06.968 "subsystems": [ 00:15:06.968 { 00:15:06.968 "subsystem": "bdev", 00:15:06.968 "config": [ 00:15:06.968 { 00:15:06.968 "params": { 00:15:06.968 "trtype": "pcie", 00:15:06.968 "traddr": "0000:00:10.0", 00:15:06.968 "name": "Nvme0" 00:15:06.968 }, 00:15:06.968 "method": "bdev_nvme_attach_controller" 00:15:06.968 }, 00:15:06.968 { 00:15:06.968 "method": "bdev_wait_for_examine" 00:15:06.968 } 00:15:06.968 ] 00:15:06.968 } 00:15:06.968 ] 00:15:06.968 } 00:15:07.227 [2024-05-15 13:53:05.590971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.227 [2024-05-15 13:53:05.685024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.486  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:07.486 00:15:07.486 13:53:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:07.486 00:15:07.486 real 0m17.474s 00:15:07.486 user 0m12.477s 00:15:07.486 sys 0m6.089s 00:15:07.486 ************************************ 00:15:07.486 END TEST spdk_dd_basic_rw 00:15:07.486 ************************************ 00:15:07.486 13:53:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:07.486 13:53:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:07.745 13:53:06 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:15:07.745 13:53:06 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:07.745 13:53:06 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:07.745 13:53:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:07.745 ************************************ 00:15:07.745 START TEST spdk_dd_posix 00:15:07.745 ************************************ 00:15:07.745 13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:15:07.745 * Looking for test storage... 
00:15:07.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:07.745 13:53:06 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.745 13:53:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.745 13:53:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.745 13:53:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.745 13:53:06 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:15:07.746 * First test run, liburing in use 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:07.746 ************************************ 00:15:07.746 START TEST dd_flag_append 00:15:07.746 ************************************ 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=55v0m9xfeyxie3yc94sdj0n8w49f2kgq 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=cp3rz2q1swrap94yvmi18u7b6pniuqpy 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 55v0m9xfeyxie3yc94sdj0n8w49f2kgq 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s cp3rz2q1swrap94yvmi18u7b6pniuqpy 00:15:07.746 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:15:07.746 [2024-05-15 13:53:06.248677] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:07.746 [2024-05-15 13:53:06.248756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62321 ] 00:15:08.006 [2024-05-15 13:53:06.387374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.006 [2024-05-15 13:53:06.489023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.265  Copying: 32/32 [B] (average 31 kBps) 00:15:08.265 00:15:08.265 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ cp3rz2q1swrap94yvmi18u7b6pniuqpy55v0m9xfeyxie3yc94sdj0n8w49f2kgq == \c\p\3\r\z\2\q\1\s\w\r\a\p\9\4\y\v\m\i\1\8\u\7\b\6\p\n\i\u\q\p\y\5\5\v\0\m\9\x\f\e\y\x\i\e\3\y\c\9\4\s\d\j\0\n\8\w\4\9\f\2\k\g\q ]] 00:15:08.265 00:15:08.265 real 0m0.571s 00:15:08.265 user 0m0.339s 00:15:08.265 sys 0m0.230s 00:15:08.265 ************************************ 00:15:08.265 END TEST dd_flag_append 00:15:08.265 ************************************ 00:15:08.265 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:08.265 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:15:08.265 13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:15:08.265 13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:08.265 13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:08.265 13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:08.525 ************************************ 00:15:08.525 START TEST dd_flag_directory 00:15:08.525 ************************************ 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:08.525 13:53:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:08.525 [2024-05-15 13:53:06.886469] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:08.525 [2024-05-15 13:53:06.886545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62345 ] 00:15:08.525 [2024-05-15 13:53:07.027770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.784 [2024-05-15 13:53:07.129718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.784 [2024-05-15 13:53:07.197392] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:08.784 [2024-05-15 13:53:07.197436] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:08.784 [2024-05-15 13:53:07.197449] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:08.784 [2024-05-15 13:53:07.290872] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:09.046 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:09.046 [2024-05-15 13:53:07.454930] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:09.046 [2024-05-15 13:53:07.455002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62360 ] 00:15:09.047 [2024-05-15 13:53:07.597485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.306 [2024-05-15 13:53:07.695389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.306 [2024-05-15 13:53:07.762484] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:09.306 [2024-05-15 13:53:07.762535] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:09.306 [2024-05-15 13:53:07.762549] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.306 [2024-05-15 13:53:07.854491] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.566 00:15:09.566 real 0m1.136s 00:15:09.566 user 0m0.674s 00:15:09.566 sys 0m0.253s 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:09.566 13:53:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 ************************************ 00:15:09.566 END TEST dd_flag_directory 00:15:09.566 ************************************ 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:09.566 ************************************ 00:15:09.566 START TEST dd_flag_nofollow 00:15:09.566 
************************************ 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:09.566 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:09.566 [2024-05-15 13:53:08.106907] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:09.566 [2024-05-15 13:53:08.106984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62383 ] 00:15:09.825 [2024-05-15 13:53:08.246967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.825 [2024-05-15 13:53:08.345313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.084 [2024-05-15 13:53:08.412277] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:10.084 [2024-05-15 13:53:08.412329] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:10.084 [2024-05-15 13:53:08.412343] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:10.084 [2024-05-15 13:53:08.504099] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:10.084 13:53:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:10.084 13:53:08 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:10.343 [2024-05-15 13:53:08.665373] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:10.343 [2024-05-15 13:53:08.665461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62398 ] 00:15:10.343 [2024-05-15 13:53:08.805801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.343 [2024-05-15 13:53:08.900603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.601 [2024-05-15 13:53:08.967656] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:10.601 [2024-05-15 13:53:08.967710] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:10.601 [2024-05-15 13:53:08.967724] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:10.601 [2024-05-15 13:53:09.058994] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:15:10.860 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:10.860 [2024-05-15 13:53:09.230930] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:10.860 [2024-05-15 13:53:09.230996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62400 ] 00:15:10.860 [2024-05-15 13:53:09.372627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.118 [2024-05-15 13:53:09.458380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.376  Copying: 512/512 [B] (average 500 kBps) 00:15:11.376 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ wyrunpyop6jsnfqax40bcdu62pwvjf3dtgobn5lebpxqq14o3ltnwkjnps7aveme8foulufcy8j21t1c4nh3zmyczdkp9dwsip1ornjo5wfvr9f8x8025xibo6dvlq09hmsg5awxostiw5179ipi6sbd2uir0n1b26d46isv41g8vp6suz52s83q7apipkvcpf446grrzzifdmif0ym2pepk5bcg9emsz79gtzi5n36rm6itqkufagjbgtljv6qjvc0un1hlwhpe8rh24k23hwqp17a69rdvt42hh9eze3nfgzyvpi7vu2ndk4o43zsk1akzmp209s9kp9uxt1tvylf8songuiblblmh3ru9xhfbxsd68fp1je2ab632yudruqr638vv9876zulu05j5idfkuuzxkjnbof376g0f3bf2pl25i8iqsowfyjiyedr9rggtsjsdgl37fozo3qyuydj4uuyrrsydqbqusn3htfhkmmnxmdx1ej80gyz03nny == \w\y\r\u\n\p\y\o\p\6\j\s\n\f\q\a\x\4\0\b\c\d\u\6\2\p\w\v\j\f\3\d\t\g\o\b\n\5\l\e\b\p\x\q\q\1\4\o\3\l\t\n\w\k\j\n\p\s\7\a\v\e\m\e\8\f\o\u\l\u\f\c\y\8\j\2\1\t\1\c\4\n\h\3\z\m\y\c\z\d\k\p\9\d\w\s\i\p\1\o\r\n\j\o\5\w\f\v\r\9\f\8\x\8\0\2\5\x\i\b\o\6\d\v\l\q\0\9\h\m\s\g\5\a\w\x\o\s\t\i\w\5\1\7\9\i\p\i\6\s\b\d\2\u\i\r\0\n\1\b\2\6\d\4\6\i\s\v\4\1\g\8\v\p\6\s\u\z\5\2\s\8\3\q\7\a\p\i\p\k\v\c\p\f\4\4\6\g\r\r\z\z\i\f\d\m\i\f\0\y\m\2\p\e\p\k\5\b\c\g\9\e\m\s\z\7\9\g\t\z\i\5\n\3\6\r\m\6\i\t\q\k\u\f\a\g\j\b\g\t\l\j\v\6\q\j\v\c\0\u\n\1\h\l\w\h\p\e\8\r\h\2\4\k\2\3\h\w\q\p\1\7\a\6\9\r\d\v\t\4\2\h\h\9\e\z\e\3\n\f\g\z\y\v\p\i\7\v\u\2\n\d\k\4\o\4\3\z\s\k\1\a\k\z\m\p\2\0\9\s\9\k\p\9\u\x\t\1\t\v\y\l\f\8\s\o\n\g\u\i\b\l\b\l\m\h\3\r\u\9\x\h\f\b\x\s\d\6\8\f\p\1\j\e\2\a\b\6\3\2\y\u\d\r\u\q\r\6\3\8\v\v\9\8\7\6\z\u\l\u\0\5\j\5\i\d\f\k\u\u\z\x\k\j\n\b\o\f\3\7\6\g\0\f\3\b\f\2\p\l\2\5\i\8\i\q\s\o\w\f\y\j\i\y\e\d\r\9\r\g\g\t\s\j\s\d\g\l\3\7\f\o\z\o\3\q\y\u\y\d\j\4\u\u\y\r\r\s\y\d\q\b\q\u\s\n\3\h\t\f\h\k\m\m\n\x\m\d\x\1\e\j\8\0\g\y\z\0\3\n\n\y ]] 00:15:11.376 00:15:11.376 real 0m1.691s 00:15:11.376 user 0m0.993s 00:15:11.376 sys 0m0.486s 00:15:11.376 ************************************ 00:15:11.376 END TEST dd_flag_nofollow 00:15:11.376 ************************************ 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:11.376 ************************************ 00:15:11.376 START TEST dd_flag_noatime 00:15:11.376 ************************************ 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 
-- # gen_bytes 512 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1715781189 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1715781189 00:15:11.376 13:53:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:15:12.310 13:53:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:12.569 [2024-05-15 13:53:10.879785] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:12.569 [2024-05-15 13:53:10.879855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62448 ] 00:15:12.569 [2024-05-15 13:53:11.020589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.569 [2024-05-15 13:53:11.113312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.828  Copying: 512/512 [B] (average 500 kBps) 00:15:12.828 00:15:12.828 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:13.087 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1715781189 )) 00:15:13.087 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:13.087 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1715781189 )) 00:15:13.087 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:13.087 [2024-05-15 13:53:11.435317] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:13.087 [2024-05-15 13:53:11.435390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62462 ] 00:15:13.087 [2024-05-15 13:53:11.573158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.346 [2024-05-15 13:53:11.658656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.605  Copying: 512/512 [B] (average 500 kBps) 00:15:13.605 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1715781191 )) 00:15:13.605 00:15:13.605 real 0m2.134s 00:15:13.605 user 0m0.645s 00:15:13.605 sys 0m0.488s 00:15:13.605 ************************************ 00:15:13.605 END TEST dd_flag_noatime 00:15:13.605 ************************************ 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:13.605 13:53:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:13.605 ************************************ 00:15:13.605 START TEST dd_flags_misc 00:15:13.605 ************************************ 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:13.605 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:13.605 [2024-05-15 13:53:12.064688] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:13.606 [2024-05-15 13:53:12.064765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62490 ] 00:15:13.884 [2024-05-15 13:53:12.204329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.884 [2024-05-15 13:53:12.292008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.143  Copying: 512/512 [B] (average 500 kBps) 00:15:14.143 00:15:14.143 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4gfz44s3mr1752xy0bmeso3tem8n3oonr7fygupp7eiwbebrq3ntqyuyerqpl7n4fdfbxm3f7nrx49qysm2wyizsb9081ipvqozogedpgbtpiwnnp7p71l9ohmrlb8qxihkpzr1mfn5vhilza8wuqzaujd8qnudcdynx4dtvf5hqoxkiok6ax7pgk24nbfog0hbwauxmcjj7wn3isyx4t0hgx5ci96hvqfm2urcqgl59oa4m81k2sd01ddcigf6n9um5xi7vo07hvj6dzy1tdy5ob355ed5qqpvyu7suq0yrzpbnglpl7m6lg35b9jh8okvpjeqdxxdkmtyr10sec9wnf3y6yztj8edfs0jjappcj8xr3mvs4v1u7eih5wg1yfkb3y5umhhqjhubcidggbgfwlb5aj036qdacry4bscpoj5klgdign1o1e80opehdtszrhb8b4jyzh90131xkr6mqja9axn7bclbujo8eftpvqu52v4e2ikthksprmrn == \4\g\f\z\4\4\s\3\m\r\1\7\5\2\x\y\0\b\m\e\s\o\3\t\e\m\8\n\3\o\o\n\r\7\f\y\g\u\p\p\7\e\i\w\b\e\b\r\q\3\n\t\q\y\u\y\e\r\q\p\l\7\n\4\f\d\f\b\x\m\3\f\7\n\r\x\4\9\q\y\s\m\2\w\y\i\z\s\b\9\0\8\1\i\p\v\q\o\z\o\g\e\d\p\g\b\t\p\i\w\n\n\p\7\p\7\1\l\9\o\h\m\r\l\b\8\q\x\i\h\k\p\z\r\1\m\f\n\5\v\h\i\l\z\a\8\w\u\q\z\a\u\j\d\8\q\n\u\d\c\d\y\n\x\4\d\t\v\f\5\h\q\o\x\k\i\o\k\6\a\x\7\p\g\k\2\4\n\b\f\o\g\0\h\b\w\a\u\x\m\c\j\j\7\w\n\3\i\s\y\x\4\t\0\h\g\x\5\c\i\9\6\h\v\q\f\m\2\u\r\c\q\g\l\5\9\o\a\4\m\8\1\k\2\s\d\0\1\d\d\c\i\g\f\6\n\9\u\m\5\x\i\7\v\o\0\7\h\v\j\6\d\z\y\1\t\d\y\5\o\b\3\5\5\e\d\5\q\q\p\v\y\u\7\s\u\q\0\y\r\z\p\b\n\g\l\p\l\7\m\6\l\g\3\5\b\9\j\h\8\o\k\v\p\j\e\q\d\x\x\d\k\m\t\y\r\1\0\s\e\c\9\w\n\f\3\y\6\y\z\t\j\8\e\d\f\s\0\j\j\a\p\p\c\j\8\x\r\3\m\v\s\4\v\1\u\7\e\i\h\5\w\g\1\y\f\k\b\3\y\5\u\m\h\h\q\j\h\u\b\c\i\d\g\g\b\g\f\w\l\b\5\a\j\0\3\6\q\d\a\c\r\y\4\b\s\c\p\o\j\5\k\l\g\d\i\g\n\1\o\1\e\8\0\o\p\e\h\d\t\s\z\r\h\b\8\b\4\j\y\z\h\9\0\1\3\1\x\k\r\6\m\q\j\a\9\a\x\n\7\b\c\l\b\u\j\o\8\e\f\t\p\v\q\u\5\2\v\4\e\2\i\k\t\h\k\s\p\r\m\r\n ]] 00:15:14.143 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:14.143 13:53:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:14.143 [2024-05-15 13:53:12.614947] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:14.143 [2024-05-15 13:53:12.615030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62500 ] 00:15:14.403 [2024-05-15 13:53:12.759267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.403 [2024-05-15 13:53:12.845417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.662  Copying: 512/512 [B] (average 500 kBps) 00:15:14.662 00:15:14.662 13:53:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4gfz44s3mr1752xy0bmeso3tem8n3oonr7fygupp7eiwbebrq3ntqyuyerqpl7n4fdfbxm3f7nrx49qysm2wyizsb9081ipvqozogedpgbtpiwnnp7p71l9ohmrlb8qxihkpzr1mfn5vhilza8wuqzaujd8qnudcdynx4dtvf5hqoxkiok6ax7pgk24nbfog0hbwauxmcjj7wn3isyx4t0hgx5ci96hvqfm2urcqgl59oa4m81k2sd01ddcigf6n9um5xi7vo07hvj6dzy1tdy5ob355ed5qqpvyu7suq0yrzpbnglpl7m6lg35b9jh8okvpjeqdxxdkmtyr10sec9wnf3y6yztj8edfs0jjappcj8xr3mvs4v1u7eih5wg1yfkb3y5umhhqjhubcidggbgfwlb5aj036qdacry4bscpoj5klgdign1o1e80opehdtszrhb8b4jyzh90131xkr6mqja9axn7bclbujo8eftpvqu52v4e2ikthksprmrn == \4\g\f\z\4\4\s\3\m\r\1\7\5\2\x\y\0\b\m\e\s\o\3\t\e\m\8\n\3\o\o\n\r\7\f\y\g\u\p\p\7\e\i\w\b\e\b\r\q\3\n\t\q\y\u\y\e\r\q\p\l\7\n\4\f\d\f\b\x\m\3\f\7\n\r\x\4\9\q\y\s\m\2\w\y\i\z\s\b\9\0\8\1\i\p\v\q\o\z\o\g\e\d\p\g\b\t\p\i\w\n\n\p\7\p\7\1\l\9\o\h\m\r\l\b\8\q\x\i\h\k\p\z\r\1\m\f\n\5\v\h\i\l\z\a\8\w\u\q\z\a\u\j\d\8\q\n\u\d\c\d\y\n\x\4\d\t\v\f\5\h\q\o\x\k\i\o\k\6\a\x\7\p\g\k\2\4\n\b\f\o\g\0\h\b\w\a\u\x\m\c\j\j\7\w\n\3\i\s\y\x\4\t\0\h\g\x\5\c\i\9\6\h\v\q\f\m\2\u\r\c\q\g\l\5\9\o\a\4\m\8\1\k\2\s\d\0\1\d\d\c\i\g\f\6\n\9\u\m\5\x\i\7\v\o\0\7\h\v\j\6\d\z\y\1\t\d\y\5\o\b\3\5\5\e\d\5\q\q\p\v\y\u\7\s\u\q\0\y\r\z\p\b\n\g\l\p\l\7\m\6\l\g\3\5\b\9\j\h\8\o\k\v\p\j\e\q\d\x\x\d\k\m\t\y\r\1\0\s\e\c\9\w\n\f\3\y\6\y\z\t\j\8\e\d\f\s\0\j\j\a\p\p\c\j\8\x\r\3\m\v\s\4\v\1\u\7\e\i\h\5\w\g\1\y\f\k\b\3\y\5\u\m\h\h\q\j\h\u\b\c\i\d\g\g\b\g\f\w\l\b\5\a\j\0\3\6\q\d\a\c\r\y\4\b\s\c\p\o\j\5\k\l\g\d\i\g\n\1\o\1\e\8\0\o\p\e\h\d\t\s\z\r\h\b\8\b\4\j\y\z\h\9\0\1\3\1\x\k\r\6\m\q\j\a\9\a\x\n\7\b\c\l\b\u\j\o\8\e\f\t\p\v\q\u\5\2\v\4\e\2\i\k\t\h\k\s\p\r\m\r\n ]] 00:15:14.662 13:53:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:14.662 13:53:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:14.662 [2024-05-15 13:53:13.165296] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:14.662 [2024-05-15 13:53:13.165374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62509 ] 00:15:14.921 [2024-05-15 13:53:13.306796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.921 [2024-05-15 13:53:13.402472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.180  Copying: 512/512 [B] (average 166 kBps) 00:15:15.180 00:15:15.180 13:53:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4gfz44s3mr1752xy0bmeso3tem8n3oonr7fygupp7eiwbebrq3ntqyuyerqpl7n4fdfbxm3f7nrx49qysm2wyizsb9081ipvqozogedpgbtpiwnnp7p71l9ohmrlb8qxihkpzr1mfn5vhilza8wuqzaujd8qnudcdynx4dtvf5hqoxkiok6ax7pgk24nbfog0hbwauxmcjj7wn3isyx4t0hgx5ci96hvqfm2urcqgl59oa4m81k2sd01ddcigf6n9um5xi7vo07hvj6dzy1tdy5ob355ed5qqpvyu7suq0yrzpbnglpl7m6lg35b9jh8okvpjeqdxxdkmtyr10sec9wnf3y6yztj8edfs0jjappcj8xr3mvs4v1u7eih5wg1yfkb3y5umhhqjhubcidggbgfwlb5aj036qdacry4bscpoj5klgdign1o1e80opehdtszrhb8b4jyzh90131xkr6mqja9axn7bclbujo8eftpvqu52v4e2ikthksprmrn == \4\g\f\z\4\4\s\3\m\r\1\7\5\2\x\y\0\b\m\e\s\o\3\t\e\m\8\n\3\o\o\n\r\7\f\y\g\u\p\p\7\e\i\w\b\e\b\r\q\3\n\t\q\y\u\y\e\r\q\p\l\7\n\4\f\d\f\b\x\m\3\f\7\n\r\x\4\9\q\y\s\m\2\w\y\i\z\s\b\9\0\8\1\i\p\v\q\o\z\o\g\e\d\p\g\b\t\p\i\w\n\n\p\7\p\7\1\l\9\o\h\m\r\l\b\8\q\x\i\h\k\p\z\r\1\m\f\n\5\v\h\i\l\z\a\8\w\u\q\z\a\u\j\d\8\q\n\u\d\c\d\y\n\x\4\d\t\v\f\5\h\q\o\x\k\i\o\k\6\a\x\7\p\g\k\2\4\n\b\f\o\g\0\h\b\w\a\u\x\m\c\j\j\7\w\n\3\i\s\y\x\4\t\0\h\g\x\5\c\i\9\6\h\v\q\f\m\2\u\r\c\q\g\l\5\9\o\a\4\m\8\1\k\2\s\d\0\1\d\d\c\i\g\f\6\n\9\u\m\5\x\i\7\v\o\0\7\h\v\j\6\d\z\y\1\t\d\y\5\o\b\3\5\5\e\d\5\q\q\p\v\y\u\7\s\u\q\0\y\r\z\p\b\n\g\l\p\l\7\m\6\l\g\3\5\b\9\j\h\8\o\k\v\p\j\e\q\d\x\x\d\k\m\t\y\r\1\0\s\e\c\9\w\n\f\3\y\6\y\z\t\j\8\e\d\f\s\0\j\j\a\p\p\c\j\8\x\r\3\m\v\s\4\v\1\u\7\e\i\h\5\w\g\1\y\f\k\b\3\y\5\u\m\h\h\q\j\h\u\b\c\i\d\g\g\b\g\f\w\l\b\5\a\j\0\3\6\q\d\a\c\r\y\4\b\s\c\p\o\j\5\k\l\g\d\i\g\n\1\o\1\e\8\0\o\p\e\h\d\t\s\z\r\h\b\8\b\4\j\y\z\h\9\0\1\3\1\x\k\r\6\m\q\j\a\9\a\x\n\7\b\c\l\b\u\j\o\8\e\f\t\p\v\q\u\5\2\v\4\e\2\i\k\t\h\k\s\p\r\m\r\n ]] 00:15:15.180 13:53:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:15.180 13:53:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:15.180 [2024-05-15 13:53:13.707392] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:15.180 [2024-05-15 13:53:13.707460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62523 ] 00:15:15.438 [2024-05-15 13:53:13.846814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.438 [2024-05-15 13:53:13.944754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.698  Copying: 512/512 [B] (average 125 kBps) 00:15:15.698 00:15:15.698 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4gfz44s3mr1752xy0bmeso3tem8n3oonr7fygupp7eiwbebrq3ntqyuyerqpl7n4fdfbxm3f7nrx49qysm2wyizsb9081ipvqozogedpgbtpiwnnp7p71l9ohmrlb8qxihkpzr1mfn5vhilza8wuqzaujd8qnudcdynx4dtvf5hqoxkiok6ax7pgk24nbfog0hbwauxmcjj7wn3isyx4t0hgx5ci96hvqfm2urcqgl59oa4m81k2sd01ddcigf6n9um5xi7vo07hvj6dzy1tdy5ob355ed5qqpvyu7suq0yrzpbnglpl7m6lg35b9jh8okvpjeqdxxdkmtyr10sec9wnf3y6yztj8edfs0jjappcj8xr3mvs4v1u7eih5wg1yfkb3y5umhhqjhubcidggbgfwlb5aj036qdacry4bscpoj5klgdign1o1e80opehdtszrhb8b4jyzh90131xkr6mqja9axn7bclbujo8eftpvqu52v4e2ikthksprmrn == \4\g\f\z\4\4\s\3\m\r\1\7\5\2\x\y\0\b\m\e\s\o\3\t\e\m\8\n\3\o\o\n\r\7\f\y\g\u\p\p\7\e\i\w\b\e\b\r\q\3\n\t\q\y\u\y\e\r\q\p\l\7\n\4\f\d\f\b\x\m\3\f\7\n\r\x\4\9\q\y\s\m\2\w\y\i\z\s\b\9\0\8\1\i\p\v\q\o\z\o\g\e\d\p\g\b\t\p\i\w\n\n\p\7\p\7\1\l\9\o\h\m\r\l\b\8\q\x\i\h\k\p\z\r\1\m\f\n\5\v\h\i\l\z\a\8\w\u\q\z\a\u\j\d\8\q\n\u\d\c\d\y\n\x\4\d\t\v\f\5\h\q\o\x\k\i\o\k\6\a\x\7\p\g\k\2\4\n\b\f\o\g\0\h\b\w\a\u\x\m\c\j\j\7\w\n\3\i\s\y\x\4\t\0\h\g\x\5\c\i\9\6\h\v\q\f\m\2\u\r\c\q\g\l\5\9\o\a\4\m\8\1\k\2\s\d\0\1\d\d\c\i\g\f\6\n\9\u\m\5\x\i\7\v\o\0\7\h\v\j\6\d\z\y\1\t\d\y\5\o\b\3\5\5\e\d\5\q\q\p\v\y\u\7\s\u\q\0\y\r\z\p\b\n\g\l\p\l\7\m\6\l\g\3\5\b\9\j\h\8\o\k\v\p\j\e\q\d\x\x\d\k\m\t\y\r\1\0\s\e\c\9\w\n\f\3\y\6\y\z\t\j\8\e\d\f\s\0\j\j\a\p\p\c\j\8\x\r\3\m\v\s\4\v\1\u\7\e\i\h\5\w\g\1\y\f\k\b\3\y\5\u\m\h\h\q\j\h\u\b\c\i\d\g\g\b\g\f\w\l\b\5\a\j\0\3\6\q\d\a\c\r\y\4\b\s\c\p\o\j\5\k\l\g\d\i\g\n\1\o\1\e\8\0\o\p\e\h\d\t\s\z\r\h\b\8\b\4\j\y\z\h\9\0\1\3\1\x\k\r\6\m\q\j\a\9\a\x\n\7\b\c\l\b\u\j\o\8\e\f\t\p\v\q\u\5\2\v\4\e\2\i\k\t\h\k\s\p\r\m\r\n ]] 00:15:15.698 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:15.698 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:15:15.698 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:15:15.698 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:15.698 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:15.698 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:15.957 [2024-05-15 13:53:14.275195] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:15.957 [2024-05-15 13:53:14.275258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62528 ] 00:15:15.957 [2024-05-15 13:53:14.416401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.957 [2024-05-15 13:53:14.509580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.216  Copying: 512/512 [B] (average 500 kBps) 00:15:16.216 00:15:16.475 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0cx3b3tumedfjk03qye5zg2fzat7fakj3g86pqmc8agxmku3oc7ylpfjncwmhz8cv61svatltpzd0wfovltlsofp30gyk544pbg76910ltp3rrcc0miivo3c6gko9pn2gj6pnno4v2plp3cuv1uv1nqbkgard0zkqcbrhlw7qhtb00hm87t9256h1vxser5w84htlrw9m0tm9eafjzkcyu3q0ohurv1hjlhl26b6k65wdwa6uxq94je4kfpsusdl6ff5hpxtppogpqowb5vhyfyqgjctkne54kvfat0czu6ea4pmyd45bb12sth936nsjxev5vc1h1wm3nsl1vy9gp7beyic7i9x59upwajlzyr0ft2beego22w7q7raermcouykktvvadbfm3n33ja2zbvwdrj0lmf4fbbup6wqqzcoq7zci6oc7vqwrfekqike6zbqho6voo56c6yqar53480upxdf8mwz9tht5l7xq29w8vg8dnmtzuneothcfmr3 == \0\c\x\3\b\3\t\u\m\e\d\f\j\k\0\3\q\y\e\5\z\g\2\f\z\a\t\7\f\a\k\j\3\g\8\6\p\q\m\c\8\a\g\x\m\k\u\3\o\c\7\y\l\p\f\j\n\c\w\m\h\z\8\c\v\6\1\s\v\a\t\l\t\p\z\d\0\w\f\o\v\l\t\l\s\o\f\p\3\0\g\y\k\5\4\4\p\b\g\7\6\9\1\0\l\t\p\3\r\r\c\c\0\m\i\i\v\o\3\c\6\g\k\o\9\p\n\2\g\j\6\p\n\n\o\4\v\2\p\l\p\3\c\u\v\1\u\v\1\n\q\b\k\g\a\r\d\0\z\k\q\c\b\r\h\l\w\7\q\h\t\b\0\0\h\m\8\7\t\9\2\5\6\h\1\v\x\s\e\r\5\w\8\4\h\t\l\r\w\9\m\0\t\m\9\e\a\f\j\z\k\c\y\u\3\q\0\o\h\u\r\v\1\h\j\l\h\l\2\6\b\6\k\6\5\w\d\w\a\6\u\x\q\9\4\j\e\4\k\f\p\s\u\s\d\l\6\f\f\5\h\p\x\t\p\p\o\g\p\q\o\w\b\5\v\h\y\f\y\q\g\j\c\t\k\n\e\5\4\k\v\f\a\t\0\c\z\u\6\e\a\4\p\m\y\d\4\5\b\b\1\2\s\t\h\9\3\6\n\s\j\x\e\v\5\v\c\1\h\1\w\m\3\n\s\l\1\v\y\9\g\p\7\b\e\y\i\c\7\i\9\x\5\9\u\p\w\a\j\l\z\y\r\0\f\t\2\b\e\e\g\o\2\2\w\7\q\7\r\a\e\r\m\c\o\u\y\k\k\t\v\v\a\d\b\f\m\3\n\3\3\j\a\2\z\b\v\w\d\r\j\0\l\m\f\4\f\b\b\u\p\6\w\q\q\z\c\o\q\7\z\c\i\6\o\c\7\v\q\w\r\f\e\k\q\i\k\e\6\z\b\q\h\o\6\v\o\o\5\6\c\6\y\q\a\r\5\3\4\8\0\u\p\x\d\f\8\m\w\z\9\t\h\t\5\l\7\x\q\2\9\w\8\v\g\8\d\n\m\t\z\u\n\e\o\t\h\c\f\m\r\3 ]] 00:15:16.475 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:16.475 13:53:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:16.475 [2024-05-15 13:53:14.808809] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:16.475 [2024-05-15 13:53:14.808875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62543 ] 00:15:16.475 [2024-05-15 13:53:14.943166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.747 [2024-05-15 13:53:15.036104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.747  Copying: 512/512 [B] (average 500 kBps) 00:15:16.747 00:15:17.008 13:53:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0cx3b3tumedfjk03qye5zg2fzat7fakj3g86pqmc8agxmku3oc7ylpfjncwmhz8cv61svatltpzd0wfovltlsofp30gyk544pbg76910ltp3rrcc0miivo3c6gko9pn2gj6pnno4v2plp3cuv1uv1nqbkgard0zkqcbrhlw7qhtb00hm87t9256h1vxser5w84htlrw9m0tm9eafjzkcyu3q0ohurv1hjlhl26b6k65wdwa6uxq94je4kfpsusdl6ff5hpxtppogpqowb5vhyfyqgjctkne54kvfat0czu6ea4pmyd45bb12sth936nsjxev5vc1h1wm3nsl1vy9gp7beyic7i9x59upwajlzyr0ft2beego22w7q7raermcouykktvvadbfm3n33ja2zbvwdrj0lmf4fbbup6wqqzcoq7zci6oc7vqwrfekqike6zbqho6voo56c6yqar53480upxdf8mwz9tht5l7xq29w8vg8dnmtzuneothcfmr3 == \0\c\x\3\b\3\t\u\m\e\d\f\j\k\0\3\q\y\e\5\z\g\2\f\z\a\t\7\f\a\k\j\3\g\8\6\p\q\m\c\8\a\g\x\m\k\u\3\o\c\7\y\l\p\f\j\n\c\w\m\h\z\8\c\v\6\1\s\v\a\t\l\t\p\z\d\0\w\f\o\v\l\t\l\s\o\f\p\3\0\g\y\k\5\4\4\p\b\g\7\6\9\1\0\l\t\p\3\r\r\c\c\0\m\i\i\v\o\3\c\6\g\k\o\9\p\n\2\g\j\6\p\n\n\o\4\v\2\p\l\p\3\c\u\v\1\u\v\1\n\q\b\k\g\a\r\d\0\z\k\q\c\b\r\h\l\w\7\q\h\t\b\0\0\h\m\8\7\t\9\2\5\6\h\1\v\x\s\e\r\5\w\8\4\h\t\l\r\w\9\m\0\t\m\9\e\a\f\j\z\k\c\y\u\3\q\0\o\h\u\r\v\1\h\j\l\h\l\2\6\b\6\k\6\5\w\d\w\a\6\u\x\q\9\4\j\e\4\k\f\p\s\u\s\d\l\6\f\f\5\h\p\x\t\p\p\o\g\p\q\o\w\b\5\v\h\y\f\y\q\g\j\c\t\k\n\e\5\4\k\v\f\a\t\0\c\z\u\6\e\a\4\p\m\y\d\4\5\b\b\1\2\s\t\h\9\3\6\n\s\j\x\e\v\5\v\c\1\h\1\w\m\3\n\s\l\1\v\y\9\g\p\7\b\e\y\i\c\7\i\9\x\5\9\u\p\w\a\j\l\z\y\r\0\f\t\2\b\e\e\g\o\2\2\w\7\q\7\r\a\e\r\m\c\o\u\y\k\k\t\v\v\a\d\b\f\m\3\n\3\3\j\a\2\z\b\v\w\d\r\j\0\l\m\f\4\f\b\b\u\p\6\w\q\q\z\c\o\q\7\z\c\i\6\o\c\7\v\q\w\r\f\e\k\q\i\k\e\6\z\b\q\h\o\6\v\o\o\5\6\c\6\y\q\a\r\5\3\4\8\0\u\p\x\d\f\8\m\w\z\9\t\h\t\5\l\7\x\q\2\9\w\8\v\g\8\d\n\m\t\z\u\n\e\o\t\h\c\f\m\r\3 ]] 00:15:17.008 13:53:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:17.008 13:53:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:17.008 [2024-05-15 13:53:15.333323] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:17.008 [2024-05-15 13:53:15.333391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62547 ] 00:15:17.008 [2024-05-15 13:53:15.473014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.008 [2024-05-15 13:53:15.560473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.283  Copying: 512/512 [B] (average 125 kBps) 00:15:17.283 00:15:17.283 13:53:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0cx3b3tumedfjk03qye5zg2fzat7fakj3g86pqmc8agxmku3oc7ylpfjncwmhz8cv61svatltpzd0wfovltlsofp30gyk544pbg76910ltp3rrcc0miivo3c6gko9pn2gj6pnno4v2plp3cuv1uv1nqbkgard0zkqcbrhlw7qhtb00hm87t9256h1vxser5w84htlrw9m0tm9eafjzkcyu3q0ohurv1hjlhl26b6k65wdwa6uxq94je4kfpsusdl6ff5hpxtppogpqowb5vhyfyqgjctkne54kvfat0czu6ea4pmyd45bb12sth936nsjxev5vc1h1wm3nsl1vy9gp7beyic7i9x59upwajlzyr0ft2beego22w7q7raermcouykktvvadbfm3n33ja2zbvwdrj0lmf4fbbup6wqqzcoq7zci6oc7vqwrfekqike6zbqho6voo56c6yqar53480upxdf8mwz9tht5l7xq29w8vg8dnmtzuneothcfmr3 == \0\c\x\3\b\3\t\u\m\e\d\f\j\k\0\3\q\y\e\5\z\g\2\f\z\a\t\7\f\a\k\j\3\g\8\6\p\q\m\c\8\a\g\x\m\k\u\3\o\c\7\y\l\p\f\j\n\c\w\m\h\z\8\c\v\6\1\s\v\a\t\l\t\p\z\d\0\w\f\o\v\l\t\l\s\o\f\p\3\0\g\y\k\5\4\4\p\b\g\7\6\9\1\0\l\t\p\3\r\r\c\c\0\m\i\i\v\o\3\c\6\g\k\o\9\p\n\2\g\j\6\p\n\n\o\4\v\2\p\l\p\3\c\u\v\1\u\v\1\n\q\b\k\g\a\r\d\0\z\k\q\c\b\r\h\l\w\7\q\h\t\b\0\0\h\m\8\7\t\9\2\5\6\h\1\v\x\s\e\r\5\w\8\4\h\t\l\r\w\9\m\0\t\m\9\e\a\f\j\z\k\c\y\u\3\q\0\o\h\u\r\v\1\h\j\l\h\l\2\6\b\6\k\6\5\w\d\w\a\6\u\x\q\9\4\j\e\4\k\f\p\s\u\s\d\l\6\f\f\5\h\p\x\t\p\p\o\g\p\q\o\w\b\5\v\h\y\f\y\q\g\j\c\t\k\n\e\5\4\k\v\f\a\t\0\c\z\u\6\e\a\4\p\m\y\d\4\5\b\b\1\2\s\t\h\9\3\6\n\s\j\x\e\v\5\v\c\1\h\1\w\m\3\n\s\l\1\v\y\9\g\p\7\b\e\y\i\c\7\i\9\x\5\9\u\p\w\a\j\l\z\y\r\0\f\t\2\b\e\e\g\o\2\2\w\7\q\7\r\a\e\r\m\c\o\u\y\k\k\t\v\v\a\d\b\f\m\3\n\3\3\j\a\2\z\b\v\w\d\r\j\0\l\m\f\4\f\b\b\u\p\6\w\q\q\z\c\o\q\7\z\c\i\6\o\c\7\v\q\w\r\f\e\k\q\i\k\e\6\z\b\q\h\o\6\v\o\o\5\6\c\6\y\q\a\r\5\3\4\8\0\u\p\x\d\f\8\m\w\z\9\t\h\t\5\l\7\x\q\2\9\w\8\v\g\8\d\n\m\t\z\u\n\e\o\t\h\c\f\m\r\3 ]] 00:15:17.283 13:53:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:17.283 13:53:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:17.542 [2024-05-15 13:53:15.879781] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:17.542 [2024-05-15 13:53:15.879847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62561 ] 00:15:17.542 [2024-05-15 13:53:16.021150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.801 [2024-05-15 13:53:16.113516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.060  Copying: 512/512 [B] (average 166 kBps) 00:15:18.060 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0cx3b3tumedfjk03qye5zg2fzat7fakj3g86pqmc8agxmku3oc7ylpfjncwmhz8cv61svatltpzd0wfovltlsofp30gyk544pbg76910ltp3rrcc0miivo3c6gko9pn2gj6pnno4v2plp3cuv1uv1nqbkgard0zkqcbrhlw7qhtb00hm87t9256h1vxser5w84htlrw9m0tm9eafjzkcyu3q0ohurv1hjlhl26b6k65wdwa6uxq94je4kfpsusdl6ff5hpxtppogpqowb5vhyfyqgjctkne54kvfat0czu6ea4pmyd45bb12sth936nsjxev5vc1h1wm3nsl1vy9gp7beyic7i9x59upwajlzyr0ft2beego22w7q7raermcouykktvvadbfm3n33ja2zbvwdrj0lmf4fbbup6wqqzcoq7zci6oc7vqwrfekqike6zbqho6voo56c6yqar53480upxdf8mwz9tht5l7xq29w8vg8dnmtzuneothcfmr3 == \0\c\x\3\b\3\t\u\m\e\d\f\j\k\0\3\q\y\e\5\z\g\2\f\z\a\t\7\f\a\k\j\3\g\8\6\p\q\m\c\8\a\g\x\m\k\u\3\o\c\7\y\l\p\f\j\n\c\w\m\h\z\8\c\v\6\1\s\v\a\t\l\t\p\z\d\0\w\f\o\v\l\t\l\s\o\f\p\3\0\g\y\k\5\4\4\p\b\g\7\6\9\1\0\l\t\p\3\r\r\c\c\0\m\i\i\v\o\3\c\6\g\k\o\9\p\n\2\g\j\6\p\n\n\o\4\v\2\p\l\p\3\c\u\v\1\u\v\1\n\q\b\k\g\a\r\d\0\z\k\q\c\b\r\h\l\w\7\q\h\t\b\0\0\h\m\8\7\t\9\2\5\6\h\1\v\x\s\e\r\5\w\8\4\h\t\l\r\w\9\m\0\t\m\9\e\a\f\j\z\k\c\y\u\3\q\0\o\h\u\r\v\1\h\j\l\h\l\2\6\b\6\k\6\5\w\d\w\a\6\u\x\q\9\4\j\e\4\k\f\p\s\u\s\d\l\6\f\f\5\h\p\x\t\p\p\o\g\p\q\o\w\b\5\v\h\y\f\y\q\g\j\c\t\k\n\e\5\4\k\v\f\a\t\0\c\z\u\6\e\a\4\p\m\y\d\4\5\b\b\1\2\s\t\h\9\3\6\n\s\j\x\e\v\5\v\c\1\h\1\w\m\3\n\s\l\1\v\y\9\g\p\7\b\e\y\i\c\7\i\9\x\5\9\u\p\w\a\j\l\z\y\r\0\f\t\2\b\e\e\g\o\2\2\w\7\q\7\r\a\e\r\m\c\o\u\y\k\k\t\v\v\a\d\b\f\m\3\n\3\3\j\a\2\z\b\v\w\d\r\j\0\l\m\f\4\f\b\b\u\p\6\w\q\q\z\c\o\q\7\z\c\i\6\o\c\7\v\q\w\r\f\e\k\q\i\k\e\6\z\b\q\h\o\6\v\o\o\5\6\c\6\y\q\a\r\5\3\4\8\0\u\p\x\d\f\8\m\w\z\9\t\h\t\5\l\7\x\q\2\9\w\8\v\g\8\d\n\m\t\z\u\n\e\o\t\h\c\f\m\r\3 ]] 00:15:18.060 00:15:18.060 real 0m4.383s 00:15:18.060 user 0m2.560s 00:15:18.060 sys 0m1.798s 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:18.060 ************************************ 00:15:18.060 END TEST dd_flags_misc 00:15:18.060 ************************************ 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:15:18.060 * Second test run, disabling liburing, forcing AIO 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:18.060 ************************************ 00:15:18.060 START TEST dd_flag_append_forced_aio 00:15:18.060 ************************************ 
00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=fdpr2nskyk4om7aal7fpenz5vxi3bjbv 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=1shacrndq8xlcxjqyziktiwx0uh9r1ge 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s fdpr2nskyk4om7aal7fpenz5vxi3bjbv 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 1shacrndq8xlcxjqyziktiwx0uh9r1ge 00:15:18.060 13:53:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:15:18.060 [2024-05-15 13:53:16.517225] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:18.060 [2024-05-15 13:53:16.517288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62591 ] 00:15:18.320 [2024-05-15 13:53:16.657348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.320 [2024-05-15 13:53:16.742772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.579  Copying: 32/32 [B] (average 31 kBps) 00:15:18.579 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 1shacrndq8xlcxjqyziktiwx0uh9r1gefdpr2nskyk4om7aal7fpenz5vxi3bjbv == \1\s\h\a\c\r\n\d\q\8\x\l\c\x\j\q\y\z\i\k\t\i\w\x\0\u\h\9\r\1\g\e\f\d\p\r\2\n\s\k\y\k\4\o\m\7\a\a\l\7\f\p\e\n\z\5\v\x\i\3\b\j\b\v ]] 00:15:18.579 00:15:18.579 real 0m0.577s 00:15:18.579 user 0m0.323s 00:15:18.579 sys 0m0.128s 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:18.579 ************************************ 00:15:18.579 END TEST dd_flag_append_forced_aio 00:15:18.579 ************************************ 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:18.579 ************************************ 00:15:18.579 START TEST dd_flag_directory_forced_aio 00:15:18.579 ************************************ 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.579 13:53:17 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:18.579 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:18.838 [2024-05-15 13:53:17.162191] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:18.838 [2024-05-15 13:53:17.162268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62617 ] 00:15:18.838 [2024-05-15 13:53:17.303739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.097 [2024-05-15 13:53:17.403558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.097 [2024-05-15 13:53:17.470628] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:19.097 [2024-05-15 13:53:17.470673] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:19.097 [2024-05-15 13:53:17.470686] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:19.097 [2024-05-15 13:53:17.562149] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:19.356 13:53:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:19.356 [2024-05-15 13:53:17.720959] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:19.356 [2024-05-15 13:53:17.721039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62627 ] 00:15:19.356 [2024-05-15 13:53:17.864766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.615 [2024-05-15 13:53:17.953083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.615 [2024-05-15 13:53:18.024405] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:19.615 [2024-05-15 13:53:18.024453] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:19.615 [2024-05-15 13:53:18.024465] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:19.615 [2024-05-15 13:53:18.117817] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:19.874 00:15:19.874 real 0m1.127s 00:15:19.874 user 0m0.661s 00:15:19.874 sys 0m0.258s 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:19.874 ************************************ 00:15:19.874 END 
TEST dd_flag_directory_forced_aio 00:15:19.874 ************************************ 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:19.874 ************************************ 00:15:19.874 START TEST dd_flag_nofollow_forced_aio 00:15:19.874 ************************************ 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:19.874 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:19.874 [2024-05-15 13:53:18.360561] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:19.874 [2024-05-15 13:53:18.360633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62655 ] 00:15:20.134 [2024-05-15 13:53:18.485255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.134 [2024-05-15 13:53:18.589810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.134 [2024-05-15 13:53:18.657046] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:20.134 [2024-05-15 13:53:18.657095] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:20.134 [2024-05-15 13:53:18.657108] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.393 [2024-05-15 13:53:18.749290] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:20.393 13:53:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:20.393 [2024-05-15 13:53:18.908569] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:20.393 [2024-05-15 13:53:18.908659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62665 ] 00:15:20.653 [2024-05-15 13:53:19.052252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.653 [2024-05-15 13:53:19.147163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.912 [2024-05-15 13:53:19.214797] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:20.912 [2024-05-15 13:53:19.214846] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:20.912 [2024-05-15 13:53:19.214860] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.912 [2024-05-15 13:53:19.306470] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:20.912 13:53:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:21.171 [2024-05-15 13:53:19.482586] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:21.171 [2024-05-15 13:53:19.482662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62672 ] 00:15:21.171 [2024-05-15 13:53:19.622469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.171 [2024-05-15 13:53:19.721683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.691  Copying: 512/512 [B] (average 500 kBps) 00:15:21.691 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ztg84ehg14519aof0g1iorq0msly1y4l0g793eikrabldwdfflu9i8fkuzaz2d048vue4cmhryeu6j34xuq40q5bhb0rah2dlgqreaxdfnmkhhgn58wjrrez5zce5jw8a39y6np92q3rkige43sp6vgffnffys7ttrzj99kubcvs47pihpt39crgsj393snatbg8vch3cgtj5ewqznr4am0qni481fx2w31qajrtcqu4nx2zbivs42du9uryka9dpqv5oyd23sc78ao27bbv9wh3nl4dbznff4tza49hl21tnx6klw2255y78kfo87kpzq8mqjny1ikm5zgqlw3ds15pgk8gqhf5bvico1nwufgle7e47e21b56uhev5z4i7y7h48hwppyz4k7qvbjruy7g1emzuf6hp45z2hma9ik004031iofru46axnqklglnclfegpdur7culp62m13siapnkjuod1p9dzyg6zepm3o711jsxc9zi6m3kglmws4u == \z\t\g\8\4\e\h\g\1\4\5\1\9\a\o\f\0\g\1\i\o\r\q\0\m\s\l\y\1\y\4\l\0\g\7\9\3\e\i\k\r\a\b\l\d\w\d\f\f\l\u\9\i\8\f\k\u\z\a\z\2\d\0\4\8\v\u\e\4\c\m\h\r\y\e\u\6\j\3\4\x\u\q\4\0\q\5\b\h\b\0\r\a\h\2\d\l\g\q\r\e\a\x\d\f\n\m\k\h\h\g\n\5\8\w\j\r\r\e\z\5\z\c\e\5\j\w\8\a\3\9\y\6\n\p\9\2\q\3\r\k\i\g\e\4\3\s\p\6\v\g\f\f\n\f\f\y\s\7\t\t\r\z\j\9\9\k\u\b\c\v\s\4\7\p\i\h\p\t\3\9\c\r\g\s\j\3\9\3\s\n\a\t\b\g\8\v\c\h\3\c\g\t\j\5\e\w\q\z\n\r\4\a\m\0\q\n\i\4\8\1\f\x\2\w\3\1\q\a\j\r\t\c\q\u\4\n\x\2\z\b\i\v\s\4\2\d\u\9\u\r\y\k\a\9\d\p\q\v\5\o\y\d\2\3\s\c\7\8\a\o\2\7\b\b\v\9\w\h\3\n\l\4\d\b\z\n\f\f\4\t\z\a\4\9\h\l\2\1\t\n\x\6\k\l\w\2\2\5\5\y\7\8\k\f\o\8\7\k\p\z\q\8\m\q\j\n\y\1\i\k\m\5\z\g\q\l\w\3\d\s\1\5\p\g\k\8\g\q\h\f\5\b\v\i\c\o\1\n\w\u\f\g\l\e\7\e\4\7\e\2\1\b\5\6\u\h\e\v\5\z\4\i\7\y\7\h\4\8\h\w\p\p\y\z\4\k\7\q\v\b\j\r\u\y\7\g\1\e\m\z\u\f\6\h\p\4\5\z\2\h\m\a\9\i\k\0\0\4\0\3\1\i\o\f\r\u\4\6\a\x\n\q\k\l\g\l\n\c\l\f\e\g\p\d\u\r\7\c\u\l\p\6\2\m\1\3\s\i\a\p\n\k\j\u\o\d\1\p\9\d\z\y\g\6\z\e\p\m\3\o\7\1\1\j\s\x\c\9\z\i\6\m\3\k\g\l\m\w\s\4\u ]] 00:15:21.691 00:15:21.691 real 0m1.710s 00:15:21.691 user 0m0.995s 00:15:21.691 sys 0m0.387s 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:21.691 ************************************ 00:15:21.691 END TEST dd_flag_nofollow_forced_aio 00:15:21.691 ************************************ 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:21.691 ************************************ 00:15:21.691 START TEST dd_flag_noatime_forced_aio 00:15:21.691 ************************************ 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 
-- # local atime_of 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1715781199 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1715781200 00:15:21.691 13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:15:22.627 13:53:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:22.627 [2024-05-15 13:53:21.155473] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:22.627 [2024-05-15 13:53:21.155538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62713 ] 00:15:22.886 [2024-05-15 13:53:21.296941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.886 [2024-05-15 13:53:21.381590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.144  Copying: 512/512 [B] (average 500 kBps) 00:15:23.144 00:15:23.144 13:53:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:23.144 13:53:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1715781199 )) 00:15:23.144 13:53:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:23.144 13:53:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1715781200 )) 00:15:23.144 13:53:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:23.404 [2024-05-15 13:53:21.725375] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:23.404 [2024-05-15 13:53:21.725439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62730 ] 00:15:23.404 [2024-05-15 13:53:21.867074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.404 [2024-05-15 13:53:21.947084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.921  Copying: 512/512 [B] (average 500 kBps) 00:15:23.921 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1715781202 )) 00:15:23.921 00:15:23.921 real 0m2.164s 00:15:23.921 user 0m0.638s 00:15:23.921 sys 0m0.286s 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:23.921 ************************************ 00:15:23.921 END TEST dd_flag_noatime_forced_aio 00:15:23.921 ************************************ 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:23.921 ************************************ 00:15:23.921 START TEST dd_flags_misc_forced_aio 00:15:23.921 ************************************ 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:23.921 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:23.921 [2024-05-15 13:53:22.365058] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:23.921 [2024-05-15 13:53:22.365231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62756 ] 00:15:24.180 [2024-05-15 13:53:22.507220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.180 [2024-05-15 13:53:22.582109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.439  Copying: 512/512 [B] (average 500 kBps) 00:15:24.439 00:15:24.439 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ x6l7thbzd9wgb7ak7i100zkpuv39ccymfwcw4jilqknumn35i6032pqhhhxjm5p32fj401acd5kjo0n3kpyq6r1oidmj0go4kowh9yipadav9xxd7jxhc466g4qthyy0wb1reyvjit8lfr9gwd8qo0ufw7fqihwgkfp1izxiu9792ggthyluv9qo40no80xilbirdvd6ccrkvs7z4y11elflizw1ou0846eav8yqz5y9g4crxuvf9ttdlivj8zetyo3yj7saolpmzyrg7fe2wexs2q5im0mi3suhrlmv06wvmtscrf8ppdpncxcxfxm27srxwf0oummv9yy8qrq7h0ktpz6a5m92sy3ckvmi7iu924ghx9on69oznetvqnhguis6dl6mmiw9os13s2itzfgse392u67vlv6qlpjuxb5vjrvo6fs97t571gjqqw6unaahayt3i1a1ek91bs1dehc114taz9csne9jdaml19jtib1adx0k26d0bqlqmfua == \x\6\l\7\t\h\b\z\d\9\w\g\b\7\a\k\7\i\1\0\0\z\k\p\u\v\3\9\c\c\y\m\f\w\c\w\4\j\i\l\q\k\n\u\m\n\3\5\i\6\0\3\2\p\q\h\h\h\x\j\m\5\p\3\2\f\j\4\0\1\a\c\d\5\k\j\o\0\n\3\k\p\y\q\6\r\1\o\i\d\m\j\0\g\o\4\k\o\w\h\9\y\i\p\a\d\a\v\9\x\x\d\7\j\x\h\c\4\6\6\g\4\q\t\h\y\y\0\w\b\1\r\e\y\v\j\i\t\8\l\f\r\9\g\w\d\8\q\o\0\u\f\w\7\f\q\i\h\w\g\k\f\p\1\i\z\x\i\u\9\7\9\2\g\g\t\h\y\l\u\v\9\q\o\4\0\n\o\8\0\x\i\l\b\i\r\d\v\d\6\c\c\r\k\v\s\7\z\4\y\1\1\e\l\f\l\i\z\w\1\o\u\0\8\4\6\e\a\v\8\y\q\z\5\y\9\g\4\c\r\x\u\v\f\9\t\t\d\l\i\v\j\8\z\e\t\y\o\3\y\j\7\s\a\o\l\p\m\z\y\r\g\7\f\e\2\w\e\x\s\2\q\5\i\m\0\m\i\3\s\u\h\r\l\m\v\0\6\w\v\m\t\s\c\r\f\8\p\p\d\p\n\c\x\c\x\f\x\m\2\7\s\r\x\w\f\0\o\u\m\m\v\9\y\y\8\q\r\q\7\h\0\k\t\p\z\6\a\5\m\9\2\s\y\3\c\k\v\m\i\7\i\u\9\2\4\g\h\x\9\o\n\6\9\o\z\n\e\t\v\q\n\h\g\u\i\s\6\d\l\6\m\m\i\w\9\o\s\1\3\s\2\i\t\z\f\g\s\e\3\9\2\u\6\7\v\l\v\6\q\l\p\j\u\x\b\5\v\j\r\v\o\6\f\s\9\7\t\5\7\1\g\j\q\q\w\6\u\n\a\a\h\a\y\t\3\i\1\a\1\e\k\9\1\b\s\1\d\e\h\c\1\1\4\t\a\z\9\c\s\n\e\9\j\d\a\m\l\1\9\j\t\i\b\1\a\d\x\0\k\2\6\d\0\b\q\l\q\m\f\u\a ]] 00:15:24.439 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:24.439 13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:24.439 [2024-05-15 13:53:22.901573] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:24.440 [2024-05-15 13:53:22.901752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62764 ] 00:15:24.699 [2024-05-15 13:53:23.041716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.699 [2024-05-15 13:53:23.131165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.958  Copying: 512/512 [B] (average 500 kBps) 00:15:24.958 00:15:24.958 13:53:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ x6l7thbzd9wgb7ak7i100zkpuv39ccymfwcw4jilqknumn35i6032pqhhhxjm5p32fj401acd5kjo0n3kpyq6r1oidmj0go4kowh9yipadav9xxd7jxhc466g4qthyy0wb1reyvjit8lfr9gwd8qo0ufw7fqihwgkfp1izxiu9792ggthyluv9qo40no80xilbirdvd6ccrkvs7z4y11elflizw1ou0846eav8yqz5y9g4crxuvf9ttdlivj8zetyo3yj7saolpmzyrg7fe2wexs2q5im0mi3suhrlmv06wvmtscrf8ppdpncxcxfxm27srxwf0oummv9yy8qrq7h0ktpz6a5m92sy3ckvmi7iu924ghx9on69oznetvqnhguis6dl6mmiw9os13s2itzfgse392u67vlv6qlpjuxb5vjrvo6fs97t571gjqqw6unaahayt3i1a1ek91bs1dehc114taz9csne9jdaml19jtib1adx0k26d0bqlqmfua == \x\6\l\7\t\h\b\z\d\9\w\g\b\7\a\k\7\i\1\0\0\z\k\p\u\v\3\9\c\c\y\m\f\w\c\w\4\j\i\l\q\k\n\u\m\n\3\5\i\6\0\3\2\p\q\h\h\h\x\j\m\5\p\3\2\f\j\4\0\1\a\c\d\5\k\j\o\0\n\3\k\p\y\q\6\r\1\o\i\d\m\j\0\g\o\4\k\o\w\h\9\y\i\p\a\d\a\v\9\x\x\d\7\j\x\h\c\4\6\6\g\4\q\t\h\y\y\0\w\b\1\r\e\y\v\j\i\t\8\l\f\r\9\g\w\d\8\q\o\0\u\f\w\7\f\q\i\h\w\g\k\f\p\1\i\z\x\i\u\9\7\9\2\g\g\t\h\y\l\u\v\9\q\o\4\0\n\o\8\0\x\i\l\b\i\r\d\v\d\6\c\c\r\k\v\s\7\z\4\y\1\1\e\l\f\l\i\z\w\1\o\u\0\8\4\6\e\a\v\8\y\q\z\5\y\9\g\4\c\r\x\u\v\f\9\t\t\d\l\i\v\j\8\z\e\t\y\o\3\y\j\7\s\a\o\l\p\m\z\y\r\g\7\f\e\2\w\e\x\s\2\q\5\i\m\0\m\i\3\s\u\h\r\l\m\v\0\6\w\v\m\t\s\c\r\f\8\p\p\d\p\n\c\x\c\x\f\x\m\2\7\s\r\x\w\f\0\o\u\m\m\v\9\y\y\8\q\r\q\7\h\0\k\t\p\z\6\a\5\m\9\2\s\y\3\c\k\v\m\i\7\i\u\9\2\4\g\h\x\9\o\n\6\9\o\z\n\e\t\v\q\n\h\g\u\i\s\6\d\l\6\m\m\i\w\9\o\s\1\3\s\2\i\t\z\f\g\s\e\3\9\2\u\6\7\v\l\v\6\q\l\p\j\u\x\b\5\v\j\r\v\o\6\f\s\9\7\t\5\7\1\g\j\q\q\w\6\u\n\a\a\h\a\y\t\3\i\1\a\1\e\k\9\1\b\s\1\d\e\h\c\1\1\4\t\a\z\9\c\s\n\e\9\j\d\a\m\l\1\9\j\t\i\b\1\a\d\x\0\k\2\6\d\0\b\q\l\q\m\f\u\a ]] 00:15:24.958 13:53:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:24.958 13:53:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:24.958 [2024-05-15 13:53:23.448962] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:24.958 [2024-05-15 13:53:23.449033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:15:25.224 [2024-05-15 13:53:23.586791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.224 [2024-05-15 13:53:23.687327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.482  Copying: 512/512 [B] (average 250 kBps) 00:15:25.482 00:15:25.482 13:53:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ x6l7thbzd9wgb7ak7i100zkpuv39ccymfwcw4jilqknumn35i6032pqhhhxjm5p32fj401acd5kjo0n3kpyq6r1oidmj0go4kowh9yipadav9xxd7jxhc466g4qthyy0wb1reyvjit8lfr9gwd8qo0ufw7fqihwgkfp1izxiu9792ggthyluv9qo40no80xilbirdvd6ccrkvs7z4y11elflizw1ou0846eav8yqz5y9g4crxuvf9ttdlivj8zetyo3yj7saolpmzyrg7fe2wexs2q5im0mi3suhrlmv06wvmtscrf8ppdpncxcxfxm27srxwf0oummv9yy8qrq7h0ktpz6a5m92sy3ckvmi7iu924ghx9on69oznetvqnhguis6dl6mmiw9os13s2itzfgse392u67vlv6qlpjuxb5vjrvo6fs97t571gjqqw6unaahayt3i1a1ek91bs1dehc114taz9csne9jdaml19jtib1adx0k26d0bqlqmfua == \x\6\l\7\t\h\b\z\d\9\w\g\b\7\a\k\7\i\1\0\0\z\k\p\u\v\3\9\c\c\y\m\f\w\c\w\4\j\i\l\q\k\n\u\m\n\3\5\i\6\0\3\2\p\q\h\h\h\x\j\m\5\p\3\2\f\j\4\0\1\a\c\d\5\k\j\o\0\n\3\k\p\y\q\6\r\1\o\i\d\m\j\0\g\o\4\k\o\w\h\9\y\i\p\a\d\a\v\9\x\x\d\7\j\x\h\c\4\6\6\g\4\q\t\h\y\y\0\w\b\1\r\e\y\v\j\i\t\8\l\f\r\9\g\w\d\8\q\o\0\u\f\w\7\f\q\i\h\w\g\k\f\p\1\i\z\x\i\u\9\7\9\2\g\g\t\h\y\l\u\v\9\q\o\4\0\n\o\8\0\x\i\l\b\i\r\d\v\d\6\c\c\r\k\v\s\7\z\4\y\1\1\e\l\f\l\i\z\w\1\o\u\0\8\4\6\e\a\v\8\y\q\z\5\y\9\g\4\c\r\x\u\v\f\9\t\t\d\l\i\v\j\8\z\e\t\y\o\3\y\j\7\s\a\o\l\p\m\z\y\r\g\7\f\e\2\w\e\x\s\2\q\5\i\m\0\m\i\3\s\u\h\r\l\m\v\0\6\w\v\m\t\s\c\r\f\8\p\p\d\p\n\c\x\c\x\f\x\m\2\7\s\r\x\w\f\0\o\u\m\m\v\9\y\y\8\q\r\q\7\h\0\k\t\p\z\6\a\5\m\9\2\s\y\3\c\k\v\m\i\7\i\u\9\2\4\g\h\x\9\o\n\6\9\o\z\n\e\t\v\q\n\h\g\u\i\s\6\d\l\6\m\m\i\w\9\o\s\1\3\s\2\i\t\z\f\g\s\e\3\9\2\u\6\7\v\l\v\6\q\l\p\j\u\x\b\5\v\j\r\v\o\6\f\s\9\7\t\5\7\1\g\j\q\q\w\6\u\n\a\a\h\a\y\t\3\i\1\a\1\e\k\9\1\b\s\1\d\e\h\c\1\1\4\t\a\z\9\c\s\n\e\9\j\d\a\m\l\1\9\j\t\i\b\1\a\d\x\0\k\2\6\d\0\b\q\l\q\m\f\u\a ]] 00:15:25.482 13:53:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:25.483 13:53:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:25.483 [2024-05-15 13:53:24.010923] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:25.483 [2024-05-15 13:53:24.010993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62779 ] 00:15:25.741 [2024-05-15 13:53:24.151657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.741 [2024-05-15 13:53:24.255259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.000  Copying: 512/512 [B] (average 250 kBps) 00:15:26.000 00:15:26.000 13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ x6l7thbzd9wgb7ak7i100zkpuv39ccymfwcw4jilqknumn35i6032pqhhhxjm5p32fj401acd5kjo0n3kpyq6r1oidmj0go4kowh9yipadav9xxd7jxhc466g4qthyy0wb1reyvjit8lfr9gwd8qo0ufw7fqihwgkfp1izxiu9792ggthyluv9qo40no80xilbirdvd6ccrkvs7z4y11elflizw1ou0846eav8yqz5y9g4crxuvf9ttdlivj8zetyo3yj7saolpmzyrg7fe2wexs2q5im0mi3suhrlmv06wvmtscrf8ppdpncxcxfxm27srxwf0oummv9yy8qrq7h0ktpz6a5m92sy3ckvmi7iu924ghx9on69oznetvqnhguis6dl6mmiw9os13s2itzfgse392u67vlv6qlpjuxb5vjrvo6fs97t571gjqqw6unaahayt3i1a1ek91bs1dehc114taz9csne9jdaml19jtib1adx0k26d0bqlqmfua == \x\6\l\7\t\h\b\z\d\9\w\g\b\7\a\k\7\i\1\0\0\z\k\p\u\v\3\9\c\c\y\m\f\w\c\w\4\j\i\l\q\k\n\u\m\n\3\5\i\6\0\3\2\p\q\h\h\h\x\j\m\5\p\3\2\f\j\4\0\1\a\c\d\5\k\j\o\0\n\3\k\p\y\q\6\r\1\o\i\d\m\j\0\g\o\4\k\o\w\h\9\y\i\p\a\d\a\v\9\x\x\d\7\j\x\h\c\4\6\6\g\4\q\t\h\y\y\0\w\b\1\r\e\y\v\j\i\t\8\l\f\r\9\g\w\d\8\q\o\0\u\f\w\7\f\q\i\h\w\g\k\f\p\1\i\z\x\i\u\9\7\9\2\g\g\t\h\y\l\u\v\9\q\o\4\0\n\o\8\0\x\i\l\b\i\r\d\v\d\6\c\c\r\k\v\s\7\z\4\y\1\1\e\l\f\l\i\z\w\1\o\u\0\8\4\6\e\a\v\8\y\q\z\5\y\9\g\4\c\r\x\u\v\f\9\t\t\d\l\i\v\j\8\z\e\t\y\o\3\y\j\7\s\a\o\l\p\m\z\y\r\g\7\f\e\2\w\e\x\s\2\q\5\i\m\0\m\i\3\s\u\h\r\l\m\v\0\6\w\v\m\t\s\c\r\f\8\p\p\d\p\n\c\x\c\x\f\x\m\2\7\s\r\x\w\f\0\o\u\m\m\v\9\y\y\8\q\r\q\7\h\0\k\t\p\z\6\a\5\m\9\2\s\y\3\c\k\v\m\i\7\i\u\9\2\4\g\h\x\9\o\n\6\9\o\z\n\e\t\v\q\n\h\g\u\i\s\6\d\l\6\m\m\i\w\9\o\s\1\3\s\2\i\t\z\f\g\s\e\3\9\2\u\6\7\v\l\v\6\q\l\p\j\u\x\b\5\v\j\r\v\o\6\f\s\9\7\t\5\7\1\g\j\q\q\w\6\u\n\a\a\h\a\y\t\3\i\1\a\1\e\k\9\1\b\s\1\d\e\h\c\1\1\4\t\a\z\9\c\s\n\e\9\j\d\a\m\l\1\9\j\t\i\b\1\a\d\x\0\k\2\6\d\0\b\q\l\q\m\f\u\a ]] 00:15:26.000 13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:26.000 13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:15:26.000 13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:26.000 13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:26.000 13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:26.000 13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:26.259 [2024-05-15 13:53:24.608818] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:26.259 [2024-05-15 13:53:24.608932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62788 ] 00:15:26.259 [2024-05-15 13:53:24.753525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.517 [2024-05-15 13:53:24.853683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.776  Copying: 512/512 [B] (average 500 kBps) 00:15:26.776 00:15:26.776 13:53:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qg3hvdhsssvrlo9f4oz3rzxfa7frreakhz98cfgtkx22k4wlvdvc64i2piygugbtl9w7dgxmfkmq5s7c1fccmtanyi2703rlwiys54ruplqv6jnd0bkkbs6dww4pmc0oe4xp96qa3f3qlhmh3isd24tma87nsc3gk3nfbw56ybaeod7qxut8v89to9mpfxj09djckuv6u34ya0au5h2utieupg7y2bcy763ixvxsm0jpeone8xgs1sc7m3bm4qkvnll5e1pordrcdpubfe3vyspz2udl2ej4xtxq9sg8jzd4dzvbtay673gteawb2xssd3mi8vr8kskiss83bypgshk7pfno13g5upbmmepjkdljov104e2pzdrx7cctwsacm046kcrmvxcoukznqilb4y5hgbajihsmn9w4vtbwg5yret02my8uc637u3t33yw3t802k2i8gi3g2l1gp650qdgd6xxzetaxiy19pz5nbtoi6t2mvzn7r9vc14mfptsm == \q\g\3\h\v\d\h\s\s\s\v\r\l\o\9\f\4\o\z\3\r\z\x\f\a\7\f\r\r\e\a\k\h\z\9\8\c\f\g\t\k\x\2\2\k\4\w\l\v\d\v\c\6\4\i\2\p\i\y\g\u\g\b\t\l\9\w\7\d\g\x\m\f\k\m\q\5\s\7\c\1\f\c\c\m\t\a\n\y\i\2\7\0\3\r\l\w\i\y\s\5\4\r\u\p\l\q\v\6\j\n\d\0\b\k\k\b\s\6\d\w\w\4\p\m\c\0\o\e\4\x\p\9\6\q\a\3\f\3\q\l\h\m\h\3\i\s\d\2\4\t\m\a\8\7\n\s\c\3\g\k\3\n\f\b\w\5\6\y\b\a\e\o\d\7\q\x\u\t\8\v\8\9\t\o\9\m\p\f\x\j\0\9\d\j\c\k\u\v\6\u\3\4\y\a\0\a\u\5\h\2\u\t\i\e\u\p\g\7\y\2\b\c\y\7\6\3\i\x\v\x\s\m\0\j\p\e\o\n\e\8\x\g\s\1\s\c\7\m\3\b\m\4\q\k\v\n\l\l\5\e\1\p\o\r\d\r\c\d\p\u\b\f\e\3\v\y\s\p\z\2\u\d\l\2\e\j\4\x\t\x\q\9\s\g\8\j\z\d\4\d\z\v\b\t\a\y\6\7\3\g\t\e\a\w\b\2\x\s\s\d\3\m\i\8\v\r\8\k\s\k\i\s\s\8\3\b\y\p\g\s\h\k\7\p\f\n\o\1\3\g\5\u\p\b\m\m\e\p\j\k\d\l\j\o\v\1\0\4\e\2\p\z\d\r\x\7\c\c\t\w\s\a\c\m\0\4\6\k\c\r\m\v\x\c\o\u\k\z\n\q\i\l\b\4\y\5\h\g\b\a\j\i\h\s\m\n\9\w\4\v\t\b\w\g\5\y\r\e\t\0\2\m\y\8\u\c\6\3\7\u\3\t\3\3\y\w\3\t\8\0\2\k\2\i\8\g\i\3\g\2\l\1\g\p\6\5\0\q\d\g\d\6\x\x\z\e\t\a\x\i\y\1\9\p\z\5\n\b\t\o\i\6\t\2\m\v\z\n\7\r\9\v\c\1\4\m\f\p\t\s\m ]] 00:15:26.776 13:53:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:26.776 13:53:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:26.776 [2024-05-15 13:53:25.191131] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:26.776 [2024-05-15 13:53:25.191198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62796 ] 00:15:26.776 [2024-05-15 13:53:25.332819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.035 [2024-05-15 13:53:25.429327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.294  Copying: 512/512 [B] (average 500 kBps) 00:15:27.294 00:15:27.294 13:53:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qg3hvdhsssvrlo9f4oz3rzxfa7frreakhz98cfgtkx22k4wlvdvc64i2piygugbtl9w7dgxmfkmq5s7c1fccmtanyi2703rlwiys54ruplqv6jnd0bkkbs6dww4pmc0oe4xp96qa3f3qlhmh3isd24tma87nsc3gk3nfbw56ybaeod7qxut8v89to9mpfxj09djckuv6u34ya0au5h2utieupg7y2bcy763ixvxsm0jpeone8xgs1sc7m3bm4qkvnll5e1pordrcdpubfe3vyspz2udl2ej4xtxq9sg8jzd4dzvbtay673gteawb2xssd3mi8vr8kskiss83bypgshk7pfno13g5upbmmepjkdljov104e2pzdrx7cctwsacm046kcrmvxcoukznqilb4y5hgbajihsmn9w4vtbwg5yret02my8uc637u3t33yw3t802k2i8gi3g2l1gp650qdgd6xxzetaxiy19pz5nbtoi6t2mvzn7r9vc14mfptsm == \q\g\3\h\v\d\h\s\s\s\v\r\l\o\9\f\4\o\z\3\r\z\x\f\a\7\f\r\r\e\a\k\h\z\9\8\c\f\g\t\k\x\2\2\k\4\w\l\v\d\v\c\6\4\i\2\p\i\y\g\u\g\b\t\l\9\w\7\d\g\x\m\f\k\m\q\5\s\7\c\1\f\c\c\m\t\a\n\y\i\2\7\0\3\r\l\w\i\y\s\5\4\r\u\p\l\q\v\6\j\n\d\0\b\k\k\b\s\6\d\w\w\4\p\m\c\0\o\e\4\x\p\9\6\q\a\3\f\3\q\l\h\m\h\3\i\s\d\2\4\t\m\a\8\7\n\s\c\3\g\k\3\n\f\b\w\5\6\y\b\a\e\o\d\7\q\x\u\t\8\v\8\9\t\o\9\m\p\f\x\j\0\9\d\j\c\k\u\v\6\u\3\4\y\a\0\a\u\5\h\2\u\t\i\e\u\p\g\7\y\2\b\c\y\7\6\3\i\x\v\x\s\m\0\j\p\e\o\n\e\8\x\g\s\1\s\c\7\m\3\b\m\4\q\k\v\n\l\l\5\e\1\p\o\r\d\r\c\d\p\u\b\f\e\3\v\y\s\p\z\2\u\d\l\2\e\j\4\x\t\x\q\9\s\g\8\j\z\d\4\d\z\v\b\t\a\y\6\7\3\g\t\e\a\w\b\2\x\s\s\d\3\m\i\8\v\r\8\k\s\k\i\s\s\8\3\b\y\p\g\s\h\k\7\p\f\n\o\1\3\g\5\u\p\b\m\m\e\p\j\k\d\l\j\o\v\1\0\4\e\2\p\z\d\r\x\7\c\c\t\w\s\a\c\m\0\4\6\k\c\r\m\v\x\c\o\u\k\z\n\q\i\l\b\4\y\5\h\g\b\a\j\i\h\s\m\n\9\w\4\v\t\b\w\g\5\y\r\e\t\0\2\m\y\8\u\c\6\3\7\u\3\t\3\3\y\w\3\t\8\0\2\k\2\i\8\g\i\3\g\2\l\1\g\p\6\5\0\q\d\g\d\6\x\x\z\e\t\a\x\i\y\1\9\p\z\5\n\b\t\o\i\6\t\2\m\v\z\n\7\r\9\v\c\1\4\m\f\p\t\s\m ]] 00:15:27.294 13:53:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:27.294 13:53:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:27.294 [2024-05-15 13:53:25.751756] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:27.295 [2024-05-15 13:53:25.751839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62809 ] 00:15:27.553 [2024-05-15 13:53:25.892709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.553 [2024-05-15 13:53:25.995697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.812  Copying: 512/512 [B] (average 166 kBps) 00:15:27.812 00:15:27.812 13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qg3hvdhsssvrlo9f4oz3rzxfa7frreakhz98cfgtkx22k4wlvdvc64i2piygugbtl9w7dgxmfkmq5s7c1fccmtanyi2703rlwiys54ruplqv6jnd0bkkbs6dww4pmc0oe4xp96qa3f3qlhmh3isd24tma87nsc3gk3nfbw56ybaeod7qxut8v89to9mpfxj09djckuv6u34ya0au5h2utieupg7y2bcy763ixvxsm0jpeone8xgs1sc7m3bm4qkvnll5e1pordrcdpubfe3vyspz2udl2ej4xtxq9sg8jzd4dzvbtay673gteawb2xssd3mi8vr8kskiss83bypgshk7pfno13g5upbmmepjkdljov104e2pzdrx7cctwsacm046kcrmvxcoukznqilb4y5hgbajihsmn9w4vtbwg5yret02my8uc637u3t33yw3t802k2i8gi3g2l1gp650qdgd6xxzetaxiy19pz5nbtoi6t2mvzn7r9vc14mfptsm == \q\g\3\h\v\d\h\s\s\s\v\r\l\o\9\f\4\o\z\3\r\z\x\f\a\7\f\r\r\e\a\k\h\z\9\8\c\f\g\t\k\x\2\2\k\4\w\l\v\d\v\c\6\4\i\2\p\i\y\g\u\g\b\t\l\9\w\7\d\g\x\m\f\k\m\q\5\s\7\c\1\f\c\c\m\t\a\n\y\i\2\7\0\3\r\l\w\i\y\s\5\4\r\u\p\l\q\v\6\j\n\d\0\b\k\k\b\s\6\d\w\w\4\p\m\c\0\o\e\4\x\p\9\6\q\a\3\f\3\q\l\h\m\h\3\i\s\d\2\4\t\m\a\8\7\n\s\c\3\g\k\3\n\f\b\w\5\6\y\b\a\e\o\d\7\q\x\u\t\8\v\8\9\t\o\9\m\p\f\x\j\0\9\d\j\c\k\u\v\6\u\3\4\y\a\0\a\u\5\h\2\u\t\i\e\u\p\g\7\y\2\b\c\y\7\6\3\i\x\v\x\s\m\0\j\p\e\o\n\e\8\x\g\s\1\s\c\7\m\3\b\m\4\q\k\v\n\l\l\5\e\1\p\o\r\d\r\c\d\p\u\b\f\e\3\v\y\s\p\z\2\u\d\l\2\e\j\4\x\t\x\q\9\s\g\8\j\z\d\4\d\z\v\b\t\a\y\6\7\3\g\t\e\a\w\b\2\x\s\s\d\3\m\i\8\v\r\8\k\s\k\i\s\s\8\3\b\y\p\g\s\h\k\7\p\f\n\o\1\3\g\5\u\p\b\m\m\e\p\j\k\d\l\j\o\v\1\0\4\e\2\p\z\d\r\x\7\c\c\t\w\s\a\c\m\0\4\6\k\c\r\m\v\x\c\o\u\k\z\n\q\i\l\b\4\y\5\h\g\b\a\j\i\h\s\m\n\9\w\4\v\t\b\w\g\5\y\r\e\t\0\2\m\y\8\u\c\6\3\7\u\3\t\3\3\y\w\3\t\8\0\2\k\2\i\8\g\i\3\g\2\l\1\g\p\6\5\0\q\d\g\d\6\x\x\z\e\t\a\x\i\y\1\9\p\z\5\n\b\t\o\i\6\t\2\m\v\z\n\7\r\9\v\c\1\4\m\f\p\t\s\m ]] 00:15:27.812 13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:27.812 13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:27.812 [2024-05-15 13:53:26.316800] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:27.812 [2024-05-15 13:53:26.316867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62811 ] 00:15:28.070 [2024-05-15 13:53:26.456583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.071 [2024-05-15 13:53:26.549379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.329  Copying: 512/512 [B] (average 250 kBps) 00:15:28.329 00:15:28.329 ************************************ 00:15:28.329 END TEST dd_flags_misc_forced_aio 00:15:28.329 ************************************ 00:15:28.329 13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qg3hvdhsssvrlo9f4oz3rzxfa7frreakhz98cfgtkx22k4wlvdvc64i2piygugbtl9w7dgxmfkmq5s7c1fccmtanyi2703rlwiys54ruplqv6jnd0bkkbs6dww4pmc0oe4xp96qa3f3qlhmh3isd24tma87nsc3gk3nfbw56ybaeod7qxut8v89to9mpfxj09djckuv6u34ya0au5h2utieupg7y2bcy763ixvxsm0jpeone8xgs1sc7m3bm4qkvnll5e1pordrcdpubfe3vyspz2udl2ej4xtxq9sg8jzd4dzvbtay673gteawb2xssd3mi8vr8kskiss83bypgshk7pfno13g5upbmmepjkdljov104e2pzdrx7cctwsacm046kcrmvxcoukznqilb4y5hgbajihsmn9w4vtbwg5yret02my8uc637u3t33yw3t802k2i8gi3g2l1gp650qdgd6xxzetaxiy19pz5nbtoi6t2mvzn7r9vc14mfptsm == \q\g\3\h\v\d\h\s\s\s\v\r\l\o\9\f\4\o\z\3\r\z\x\f\a\7\f\r\r\e\a\k\h\z\9\8\c\f\g\t\k\x\2\2\k\4\w\l\v\d\v\c\6\4\i\2\p\i\y\g\u\g\b\t\l\9\w\7\d\g\x\m\f\k\m\q\5\s\7\c\1\f\c\c\m\t\a\n\y\i\2\7\0\3\r\l\w\i\y\s\5\4\r\u\p\l\q\v\6\j\n\d\0\b\k\k\b\s\6\d\w\w\4\p\m\c\0\o\e\4\x\p\9\6\q\a\3\f\3\q\l\h\m\h\3\i\s\d\2\4\t\m\a\8\7\n\s\c\3\g\k\3\n\f\b\w\5\6\y\b\a\e\o\d\7\q\x\u\t\8\v\8\9\t\o\9\m\p\f\x\j\0\9\d\j\c\k\u\v\6\u\3\4\y\a\0\a\u\5\h\2\u\t\i\e\u\p\g\7\y\2\b\c\y\7\6\3\i\x\v\x\s\m\0\j\p\e\o\n\e\8\x\g\s\1\s\c\7\m\3\b\m\4\q\k\v\n\l\l\5\e\1\p\o\r\d\r\c\d\p\u\b\f\e\3\v\y\s\p\z\2\u\d\l\2\e\j\4\x\t\x\q\9\s\g\8\j\z\d\4\d\z\v\b\t\a\y\6\7\3\g\t\e\a\w\b\2\x\s\s\d\3\m\i\8\v\r\8\k\s\k\i\s\s\8\3\b\y\p\g\s\h\k\7\p\f\n\o\1\3\g\5\u\p\b\m\m\e\p\j\k\d\l\j\o\v\1\0\4\e\2\p\z\d\r\x\7\c\c\t\w\s\a\c\m\0\4\6\k\c\r\m\v\x\c\o\u\k\z\n\q\i\l\b\4\y\5\h\g\b\a\j\i\h\s\m\n\9\w\4\v\t\b\w\g\5\y\r\e\t\0\2\m\y\8\u\c\6\3\7\u\3\t\3\3\y\w\3\t\8\0\2\k\2\i\8\g\i\3\g\2\l\1\g\p\6\5\0\q\d\g\d\6\x\x\z\e\t\a\x\i\y\1\9\p\z\5\n\b\t\o\i\6\t\2\m\v\z\n\7\r\9\v\c\1\4\m\f\p\t\s\m ]] 00:15:28.329 00:15:28.329 real 0m4.573s 00:15:28.329 user 0m2.605s 00:15:28.329 sys 0m0.968s 00:15:28.329 13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:28.329 13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:28.588 13:53:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:15:28.588 13:53:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:28.588 13:53:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:28.588 ************************************ 00:15:28.588 END TEST spdk_dd_posix 00:15:28.588 ************************************ 00:15:28.588 00:15:28.588 real 0m20.844s 00:15:28.588 user 0m10.688s 00:15:28.588 sys 0m5.804s 00:15:28.588 13:53:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:28.588 13:53:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:28.588 13:53:26 spdk_dd -- dd/dd.sh@22 
-- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:28.588 13:53:26 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:28.588 13:53:26 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:28.588 13:53:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:28.588 ************************************ 00:15:28.588 START TEST spdk_dd_malloc 00:15:28.588 ************************************ 00:15:28.588 13:53:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:28.588 * Looking for test storage... 00:15:28.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:15:28.588 ************************************ 00:15:28.588 START TEST dd_malloc_copy 00:15:28.588 ************************************ 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:28.588 13:53:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:28.588 [2024-05-15 13:53:27.129334] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:28.589 [2024-05-15 13:53:27.129414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62885 ] 00:15:28.846 { 00:15:28.846 "subsystems": [ 00:15:28.846 { 00:15:28.846 "subsystem": "bdev", 00:15:28.846 "config": [ 00:15:28.846 { 00:15:28.846 "params": { 00:15:28.846 "block_size": 512, 00:15:28.846 "num_blocks": 1048576, 00:15:28.846 "name": "malloc0" 00:15:28.846 }, 00:15:28.846 "method": "bdev_malloc_create" 00:15:28.846 }, 00:15:28.846 { 00:15:28.846 "params": { 00:15:28.846 "block_size": 512, 00:15:28.846 "num_blocks": 1048576, 00:15:28.846 "name": "malloc1" 00:15:28.846 }, 00:15:28.846 "method": "bdev_malloc_create" 00:15:28.846 }, 00:15:28.846 { 00:15:28.846 "method": "bdev_wait_for_examine" 00:15:28.846 } 00:15:28.846 ] 00:15:28.846 } 00:15:28.846 ] 00:15:28.846 } 00:15:28.846 [2024-05-15 13:53:27.274440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.846 [2024-05-15 13:53:27.368294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.769  Copying: 264/512 [MB] (264 MBps) Copying: 512/512 [MB] (average 265 MBps) 00:15:31.769 00:15:31.769 13:53:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:15:31.769 13:53:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:15:31.769 13:53:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:31.769 13:53:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:31.769 [2024-05-15 13:53:30.154073] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:31.769 [2024-05-15 13:53:30.154291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62927 ] 00:15:31.769 { 00:15:31.769 "subsystems": [ 00:15:31.769 { 00:15:31.769 "subsystem": "bdev", 00:15:31.769 "config": [ 00:15:31.769 { 00:15:31.769 "params": { 00:15:31.769 "block_size": 512, 00:15:31.769 "num_blocks": 1048576, 00:15:31.769 "name": "malloc0" 00:15:31.769 }, 00:15:31.769 "method": "bdev_malloc_create" 00:15:31.769 }, 00:15:31.769 { 00:15:31.769 "params": { 00:15:31.769 "block_size": 512, 00:15:31.769 "num_blocks": 1048576, 00:15:31.769 "name": "malloc1" 00:15:31.769 }, 00:15:31.769 "method": "bdev_malloc_create" 00:15:31.769 }, 00:15:31.769 { 00:15:31.769 "method": "bdev_wait_for_examine" 00:15:31.769 } 00:15:31.769 ] 00:15:31.769 } 00:15:31.769 ] 00:15:31.769 } 00:15:31.769 [2024-05-15 13:53:30.295849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.028 [2024-05-15 13:53:30.392320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.604  Copying: 265/512 [MB] (265 MBps) Copying: 512/512 [MB] (average 265 MBps) 00:15:34.604 00:15:34.604 ************************************ 00:15:34.605 END TEST dd_malloc_copy 00:15:34.605 ************************************ 00:15:34.605 00:15:34.605 real 0m6.065s 00:15:34.605 user 0m5.251s 00:15:34.605 sys 0m0.670s 00:15:34.605 13:53:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:34.605 13:53:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:34.864 ************************************ 00:15:34.864 END TEST spdk_dd_malloc 00:15:34.864 ************************************ 00:15:34.864 00:15:34.864 real 0m6.221s 00:15:34.864 user 0m5.300s 00:15:34.864 sys 0m0.780s 00:15:34.864 13:53:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:34.864 13:53:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:15:34.864 13:53:33 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:15:34.864 13:53:33 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:34.864 13:53:33 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:34.864 13:53:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:34.864 ************************************ 00:15:34.864 START TEST spdk_dd_bdev_to_bdev 00:15:34.864 ************************************ 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:15:34.864 * Looking for test storage... 
00:15:34.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:15:34.864 
13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:15:34.864 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:15:34.865 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:15:34.865 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:15:34.865 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:34.865 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:34.865 ************************************ 00:15:34.865 START TEST dd_inflate_file 00:15:34.865 ************************************ 00:15:34.865 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:15:35.122 [2024-05-15 13:53:33.441931] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:35.122 [2024-05-15 13:53:33.442002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63026 ] 00:15:35.122 [2024-05-15 13:53:33.583309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.122 [2024-05-15 13:53:33.671285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.638  Copying: 64/64 [MB] (average 1454 MBps) 00:15:35.638 00:15:35.638 00:15:35.638 real 0m0.588s 00:15:35.638 user 0m0.366s 00:15:35.638 sys 0m0.274s 00:15:35.638 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:35.638 13:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:15:35.638 ************************************ 00:15:35.638 END TEST dd_inflate_file 00:15:35.638 ************************************ 00:15:35.638 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:15:35.638 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:35.639 ************************************ 00:15:35.639 START TEST dd_copy_to_out_bdev 00:15:35.639 ************************************ 00:15:35.639 13:53:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:15:35.639 [2024-05-15 13:53:34.096030] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:35.639 [2024-05-15 13:53:34.096236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63064 ] 00:15:35.639 { 00:15:35.639 "subsystems": [ 00:15:35.639 { 00:15:35.639 "subsystem": "bdev", 00:15:35.639 "config": [ 00:15:35.639 { 00:15:35.639 "params": { 00:15:35.639 "trtype": "pcie", 00:15:35.639 "traddr": "0000:00:10.0", 00:15:35.639 "name": "Nvme0" 00:15:35.639 }, 00:15:35.639 "method": "bdev_nvme_attach_controller" 00:15:35.639 }, 00:15:35.639 { 00:15:35.639 "params": { 00:15:35.639 "trtype": "pcie", 00:15:35.639 "traddr": "0000:00:11.0", 00:15:35.639 "name": "Nvme1" 00:15:35.639 }, 00:15:35.639 "method": "bdev_nvme_attach_controller" 00:15:35.639 }, 00:15:35.639 { 00:15:35.639 "method": "bdev_wait_for_examine" 00:15:35.639 } 00:15:35.639 ] 00:15:35.639 } 00:15:35.639 ] 00:15:35.639 } 00:15:35.897 [2024-05-15 13:53:34.237611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.897 [2024-05-15 13:53:34.335835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.272  Copying: 64/64 [MB] (average 64 MBps) 00:15:37.272 00:15:37.272 00:15:37.272 real 0m1.723s 00:15:37.272 user 0m1.519s 00:15:37.272 sys 0m1.299s 00:15:37.272 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:37.272 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:37.272 ************************************ 00:15:37.272 END TEST dd_copy_to_out_bdev 00:15:37.272 ************************************ 00:15:37.272 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:15:37.272 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:15:37.272 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:37.272 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:37.272 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:37.532 ************************************ 00:15:37.532 START TEST dd_offset_magic 00:15:37.532 ************************************ 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:37.532 13:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:37.532 
[2024-05-15 13:53:35.894303] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:37.532 [2024-05-15 13:53:35.894484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63099 ] 00:15:37.532 { 00:15:37.532 "subsystems": [ 00:15:37.532 { 00:15:37.532 "subsystem": "bdev", 00:15:37.532 "config": [ 00:15:37.532 { 00:15:37.532 "params": { 00:15:37.532 "trtype": "pcie", 00:15:37.532 "traddr": "0000:00:10.0", 00:15:37.532 "name": "Nvme0" 00:15:37.532 }, 00:15:37.532 "method": "bdev_nvme_attach_controller" 00:15:37.532 }, 00:15:37.532 { 00:15:37.532 "params": { 00:15:37.532 "trtype": "pcie", 00:15:37.532 "traddr": "0000:00:11.0", 00:15:37.532 "name": "Nvme1" 00:15:37.532 }, 00:15:37.532 "method": "bdev_nvme_attach_controller" 00:15:37.532 }, 00:15:37.532 { 00:15:37.532 "method": "bdev_wait_for_examine" 00:15:37.532 } 00:15:37.532 ] 00:15:37.532 } 00:15:37.532 ] 00:15:37.532 } 00:15:37.532 [2024-05-15 13:53:36.028481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.791 [2024-05-15 13:53:36.120102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.310  Copying: 65/65 [MB] (average 738 MBps) 00:15:38.310 00:15:38.310 13:53:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:15:38.310 13:53:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:15:38.310 13:53:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:38.310 13:53:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:38.310 [2024-05-15 13:53:36.708041] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:38.310 [2024-05-15 13:53:36.708107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63119 ] 00:15:38.310 { 00:15:38.310 "subsystems": [ 00:15:38.310 { 00:15:38.310 "subsystem": "bdev", 00:15:38.310 "config": [ 00:15:38.310 { 00:15:38.310 "params": { 00:15:38.310 "trtype": "pcie", 00:15:38.310 "traddr": "0000:00:10.0", 00:15:38.310 "name": "Nvme0" 00:15:38.310 }, 00:15:38.310 "method": "bdev_nvme_attach_controller" 00:15:38.310 }, 00:15:38.310 { 00:15:38.310 "params": { 00:15:38.310 "trtype": "pcie", 00:15:38.310 "traddr": "0000:00:11.0", 00:15:38.310 "name": "Nvme1" 00:15:38.310 }, 00:15:38.310 "method": "bdev_nvme_attach_controller" 00:15:38.310 }, 00:15:38.310 { 00:15:38.310 "method": "bdev_wait_for_examine" 00:15:38.310 } 00:15:38.310 ] 00:15:38.310 } 00:15:38.310 ] 00:15:38.310 } 00:15:38.310 [2024-05-15 13:53:36.848995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.569 [2024-05-15 13:53:36.944626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.827  Copying: 1024/1024 [kB] (average 500 MBps) 00:15:38.827 00:15:38.827 13:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:15:38.827 13:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:15:38.827 13:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:15:38.827 13:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:15:38.827 13:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:15:38.827 13:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:38.827 13:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:39.085 [2024-05-15 13:53:37.403076] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:39.085 [2024-05-15 13:53:37.403147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63141 ] 00:15:39.085 { 00:15:39.085 "subsystems": [ 00:15:39.085 { 00:15:39.085 "subsystem": "bdev", 00:15:39.085 "config": [ 00:15:39.085 { 00:15:39.085 "params": { 00:15:39.085 "trtype": "pcie", 00:15:39.085 "traddr": "0000:00:10.0", 00:15:39.085 "name": "Nvme0" 00:15:39.085 }, 00:15:39.085 "method": "bdev_nvme_attach_controller" 00:15:39.085 }, 00:15:39.085 { 00:15:39.085 "params": { 00:15:39.085 "trtype": "pcie", 00:15:39.085 "traddr": "0000:00:11.0", 00:15:39.085 "name": "Nvme1" 00:15:39.085 }, 00:15:39.085 "method": "bdev_nvme_attach_controller" 00:15:39.085 }, 00:15:39.085 { 00:15:39.085 "method": "bdev_wait_for_examine" 00:15:39.085 } 00:15:39.085 ] 00:15:39.085 } 00:15:39.085 ] 00:15:39.085 } 00:15:39.085 [2024-05-15 13:53:37.544573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.085 [2024-05-15 13:53:37.639815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.654  Copying: 65/65 [MB] (average 812 MBps) 00:15:39.654 00:15:39.654 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:15:39.654 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:15:39.654 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:39.654 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:39.914 [2024-05-15 13:53:38.220926] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:39.914 [2024-05-15 13:53:38.220996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63156 ] 00:15:39.914 { 00:15:39.914 "subsystems": [ 00:15:39.914 { 00:15:39.914 "subsystem": "bdev", 00:15:39.914 "config": [ 00:15:39.914 { 00:15:39.914 "params": { 00:15:39.914 "trtype": "pcie", 00:15:39.914 "traddr": "0000:00:10.0", 00:15:39.914 "name": "Nvme0" 00:15:39.914 }, 00:15:39.914 "method": "bdev_nvme_attach_controller" 00:15:39.914 }, 00:15:39.914 { 00:15:39.914 "params": { 00:15:39.914 "trtype": "pcie", 00:15:39.914 "traddr": "0000:00:11.0", 00:15:39.914 "name": "Nvme1" 00:15:39.914 }, 00:15:39.914 "method": "bdev_nvme_attach_controller" 00:15:39.914 }, 00:15:39.914 { 00:15:39.914 "method": "bdev_wait_for_examine" 00:15:39.914 } 00:15:39.914 ] 00:15:39.914 } 00:15:39.914 ] 00:15:39.914 } 00:15:39.914 [2024-05-15 13:53:38.360496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.914 [2024-05-15 13:53:38.458861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.431  Copying: 1024/1024 [kB] (average 500 MBps) 00:15:40.431 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:15:40.431 00:15:40.431 real 0m3.034s 00:15:40.431 user 0m2.258s 00:15:40.431 sys 0m0.806s 00:15:40.431 ************************************ 00:15:40.431 END TEST dd_offset_magic 00:15:40.431 ************************************ 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:40.431 13:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:40.431 [2024-05-15 13:53:38.980336] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:40.431 [2024-05-15 13:53:38.980404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63187 ] 00:15:40.690 { 00:15:40.690 "subsystems": [ 00:15:40.690 { 00:15:40.690 "subsystem": "bdev", 00:15:40.690 "config": [ 00:15:40.690 { 00:15:40.690 "params": { 00:15:40.690 "trtype": "pcie", 00:15:40.690 "traddr": "0000:00:10.0", 00:15:40.690 "name": "Nvme0" 00:15:40.690 }, 00:15:40.690 "method": "bdev_nvme_attach_controller" 00:15:40.690 }, 00:15:40.690 { 00:15:40.690 "params": { 00:15:40.690 "trtype": "pcie", 00:15:40.690 "traddr": "0000:00:11.0", 00:15:40.690 "name": "Nvme1" 00:15:40.690 }, 00:15:40.690 "method": "bdev_nvme_attach_controller" 00:15:40.690 }, 00:15:40.690 { 00:15:40.690 "method": "bdev_wait_for_examine" 00:15:40.690 } 00:15:40.690 ] 00:15:40.690 } 00:15:40.690 ] 00:15:40.690 } 00:15:40.690 [2024-05-15 13:53:39.120377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.690 [2024-05-15 13:53:39.215993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.207  Copying: 5120/5120 [kB] (average 1000 MBps) 00:15:41.207 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:41.207 13:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:41.207 [2024-05-15 13:53:39.679716] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:41.207 [2024-05-15 13:53:39.679794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63208 ] 00:15:41.207 { 00:15:41.207 "subsystems": [ 00:15:41.207 { 00:15:41.207 "subsystem": "bdev", 00:15:41.207 "config": [ 00:15:41.207 { 00:15:41.207 "params": { 00:15:41.207 "trtype": "pcie", 00:15:41.207 "traddr": "0000:00:10.0", 00:15:41.207 "name": "Nvme0" 00:15:41.207 }, 00:15:41.207 "method": "bdev_nvme_attach_controller" 00:15:41.207 }, 00:15:41.207 { 00:15:41.207 "params": { 00:15:41.207 "trtype": "pcie", 00:15:41.207 "traddr": "0000:00:11.0", 00:15:41.207 "name": "Nvme1" 00:15:41.207 }, 00:15:41.207 "method": "bdev_nvme_attach_controller" 00:15:41.207 }, 00:15:41.207 { 00:15:41.207 "method": "bdev_wait_for_examine" 00:15:41.207 } 00:15:41.207 ] 00:15:41.207 } 00:15:41.207 ] 00:15:41.207 } 00:15:41.466 [2024-05-15 13:53:39.821020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.466 [2024-05-15 13:53:39.924965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.983  Copying: 5120/5120 [kB] (average 833 MBps) 00:15:41.983 00:15:41.983 13:53:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:15:41.983 00:15:41.983 real 0m7.125s 00:15:41.983 user 0m5.276s 00:15:41.983 sys 0m3.105s 00:15:41.983 13:53:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.983 13:53:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 ************************************ 00:15:41.983 END TEST spdk_dd_bdev_to_bdev 00:15:41.983 ************************************ 00:15:41.983 13:53:40 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:15:41.983 13:53:40 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:15:41.983 13:53:40 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:41.983 13:53:40 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.983 13:53:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 ************************************ 00:15:41.983 START TEST spdk_dd_uring 00:15:41.983 ************************************ 00:15:41.983 13:53:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:15:42.243 * Looking for test storage... 
00:15:42.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:15:42.243 ************************************ 00:15:42.243 START TEST dd_uring_copy 00:15:42.243 ************************************ 00:15:42.243 
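For readability, the dd_uring_copy flow recorded in the xtrace below reduces to roughly the following shell sequence. This is a condensed, illustrative sketch reconstructed from the trace, not verbatim log output: the zram disksize sysfs target, the echo redirection, and the shortened paths/JSON argument are assumptions inferred from the visible commands.

    # Allocate a zram device and size it to 512M (hot_add prints the new device id; 1 in this run).
    dev_id=$(cat /sys/class/zram-control/hot_add)
    echo 512M > /sys/block/zram${dev_id}/disksize          # assumed sysfs target for set_zram_dev

    # Write a 1024-byte random "magic" string to magic.dump0 (redirection assumed),
    # then pad the file with zeroes to just under 512 MiB.
    echo "$magic" > magic.dump0
    spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1

    # Round-trip the data file -> uring bdev -> file, using a JSON config that creates
    # a malloc0 bdev (1048576 blocks of 512 B) and a uring0 bdev backed by /dev/zram1.
    spdk_dd --if=magic.dump0 --ob=uring0 --json bdev.json   # config is piped via /dev/fd in the log
    spdk_dd --ib=uring0 --of=magic.dump1 --json bdev.json

    # Verify: the first 1024 bytes of each dump must match $magic, and the files must be identical.
    diff -q magic.dump0 magic.dump1

The trace that follows also exercises a uring0 -> malloc0 copy and a negative path (copying from uring0 after bdev_uring_delete), before the zram device is removed and the magic dumps are deleted.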
13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1121 -- # uring_zram_copy 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=7zun6ix1p9nuceeqvoqtxo5juz8qhnhvi7x7t0fnwtt3s4pq682gjgfz2ibxk2gp63y0zkdnx3j2lfztsx38av5ulkcmo75p7x4dk5dz8mz0n2y5d2m1ca2frrt0rk0nxz5oyamgoolhmco9nm1152kxidhp6slk8p1myqqkv3v2v28k2e775hw9jjm1llnrb8giyh7kpbndbglqvqb072v4lpjzdoqkmtrac5gb6s1hv6n04gwibiyab9hys44uliij1ns3i2xl5tm6b2n7wl9wd5nwp920me36l5ztjkokjbu34ady69cxunk8naplopti30uiibqqt0waljmwh9y48x6j3g1kcpibvryx7vme1kycw4x1aqasx3vlpcty35lkk8d734uqv1eduewcld3iocux59zh72qpf3b0jfwsiutwv29urlthwyzhu81hzi0gery84ywsaqu8ovp8hzzwv7p9vp55z2xs7wm7l4v25ot7ds8of5f2hkq6kl05z9pz5onbu8a1336k68awaljns6kvawwsavhiz4iyi8ex2x436olu92etmhpo7ba9wk0316lfr1720slzx5scnwuzoc9t6vmd294p586lg94mq0vl5ocvb7kps27a0d1ycy2dc58epmucezaknrprhl6cec3f26d3jrj9atxgkn6ho829ohxt27oxywot1zi11xm1o6qisvqw10w92iatnpg3f6rl1stgu74t4v9wig1chewiw2f7hvipc8oeh2b0vw7ccb60y86i2iabhu01vrntbtsxt27g6qwzzc3k5i9xgdwns83trhaa5aglcd2b4fbcq39bwcvldmp9ogokqu5m2ysxuw0hhvvrnliskefjjvsf3peb5utw1qz949trdozclyxnlgmjihd9aamzbwscd3bc1qjnk1gt05qddbo6ft1w2apema1xtgqvlxj6c3kx7x4cb7qlkglzv5khv1chnqbtb76p6pq7ha7l71bo5eu4bqwn7507078r9iw1 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 7zun6ix1p9nuceeqvoqtxo5juz8qhnhvi7x7t0fnwtt3s4pq682gjgfz2ibxk2gp63y0zkdnx3j2lfztsx38av5ulkcmo75p7x4dk5dz8mz0n2y5d2m1ca2frrt0rk0nxz5oyamgoolhmco9nm1152kxidhp6slk8p1myqqkv3v2v28k2e775hw9jjm1llnrb8giyh7kpbndbglqvqb072v4lpjzdoqkmtrac5gb6s1hv6n04gwibiyab9hys44uliij1ns3i2xl5tm6b2n7wl9wd5nwp920me36l5ztjkokjbu34ady69cxunk8naplopti30uiibqqt0waljmwh9y48x6j3g1kcpibvryx7vme1kycw4x1aqasx3vlpcty35lkk8d734uqv1eduewcld3iocux59zh72qpf3b0jfwsiutwv29urlthwyzhu81hzi0gery84ywsaqu8ovp8hzzwv7p9vp55z2xs7wm7l4v25ot7ds8of5f2hkq6kl05z9pz5onbu8a1336k68awaljns6kvawwsavhiz4iyi8ex2x436olu92etmhpo7ba9wk0316lfr1720slzx5scnwuzoc9t6vmd294p586lg94mq0vl5ocvb7kps27a0d1ycy2dc58epmucezaknrprhl6cec3f26d3jrj9atxgkn6ho829ohxt27oxywot1zi11xm1o6qisvqw10w92iatnpg3f6rl1stgu74t4v9wig1chewiw2f7hvipc8oeh2b0vw7ccb60y86i2iabhu01vrntbtsxt27g6qwzzc3k5i9xgdwns83trhaa5aglcd2b4fbcq39bwcvldmp9ogokqu5m2ysxuw0hhvvrnliskefjjvsf3peb5utw1qz949trdozclyxnlgmjihd9aamzbwscd3bc1qjnk1gt05qddbo6ft1w2apema1xtgqvlxj6c3kx7x4cb7qlkglzv5khv1chnqbtb76p6pq7ha7l71bo5eu4bqwn7507078r9iw1 00:15:42.243 13:53:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:15:42.243 [2024-05-15 13:53:40.663208] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:42.243 [2024-05-15 13:53:40.663273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63278 ] 00:15:42.503 [2024-05-15 13:53:40.802871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.503 [2024-05-15 13:53:40.894308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.329  Copying: 511/511 [MB] (average 1418 MBps) 00:15:43.329 00:15:43.329 13:53:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:15:43.329 13:53:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:15:43.329 13:53:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:43.329 13:53:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:43.588 [2024-05-15 13:53:41.890023] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:43.588 [2024-05-15 13:53:41.890101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:15:43.588 { 00:15:43.588 "subsystems": [ 00:15:43.588 { 00:15:43.588 "subsystem": "bdev", 00:15:43.588 "config": [ 00:15:43.588 { 00:15:43.588 "params": { 00:15:43.588 "block_size": 512, 00:15:43.588 "num_blocks": 1048576, 00:15:43.588 "name": "malloc0" 00:15:43.588 }, 00:15:43.588 "method": "bdev_malloc_create" 00:15:43.588 }, 00:15:43.588 { 00:15:43.588 "params": { 00:15:43.588 "filename": "/dev/zram1", 00:15:43.588 "name": "uring0" 00:15:43.588 }, 00:15:43.588 "method": "bdev_uring_create" 00:15:43.588 }, 00:15:43.588 { 00:15:43.588 "method": "bdev_wait_for_examine" 00:15:43.588 } 00:15:43.588 ] 00:15:43.588 } 00:15:43.588 ] 00:15:43.588 } 00:15:43.588 [2024-05-15 13:53:42.033556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.588 [2024-05-15 13:53:42.137412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.164  Copying: 265/512 [MB] (265 MBps) Copying: 512/512 [MB] (average 266 MBps) 00:15:46.164 00:15:46.164 13:53:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:15:46.164 13:53:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:15:46.164 13:53:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:46.164 13:53:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:46.164 [2024-05-15 13:53:44.675667] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:46.164 [2024-05-15 13:53:44.676232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63333 ] 00:15:46.164 { 00:15:46.164 "subsystems": [ 00:15:46.164 { 00:15:46.164 "subsystem": "bdev", 00:15:46.164 "config": [ 00:15:46.164 { 00:15:46.164 "params": { 00:15:46.164 "block_size": 512, 00:15:46.164 "num_blocks": 1048576, 00:15:46.164 "name": "malloc0" 00:15:46.164 }, 00:15:46.164 "method": "bdev_malloc_create" 00:15:46.164 }, 00:15:46.164 { 00:15:46.164 "params": { 00:15:46.164 "filename": "/dev/zram1", 00:15:46.164 "name": "uring0" 00:15:46.164 }, 00:15:46.164 "method": "bdev_uring_create" 00:15:46.164 }, 00:15:46.164 { 00:15:46.164 "method": "bdev_wait_for_examine" 00:15:46.164 } 00:15:46.164 ] 00:15:46.164 } 00:15:46.164 ] 00:15:46.164 } 00:15:46.423 [2024-05-15 13:53:44.817782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.423 [2024-05-15 13:53:44.906204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.555  Copying: 215/512 [MB] (215 MBps) Copying: 429/512 [MB] (214 MBps) Copying: 512/512 [MB] (average 213 MBps) 00:15:49.555 00:15:49.555 13:53:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:15:49.555 13:53:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 7zun6ix1p9nuceeqvoqtxo5juz8qhnhvi7x7t0fnwtt3s4pq682gjgfz2ibxk2gp63y0zkdnx3j2lfztsx38av5ulkcmo75p7x4dk5dz8mz0n2y5d2m1ca2frrt0rk0nxz5oyamgoolhmco9nm1152kxidhp6slk8p1myqqkv3v2v28k2e775hw9jjm1llnrb8giyh7kpbndbglqvqb072v4lpjzdoqkmtrac5gb6s1hv6n04gwibiyab9hys44uliij1ns3i2xl5tm6b2n7wl9wd5nwp920me36l5ztjkokjbu34ady69cxunk8naplopti30uiibqqt0waljmwh9y48x6j3g1kcpibvryx7vme1kycw4x1aqasx3vlpcty35lkk8d734uqv1eduewcld3iocux59zh72qpf3b0jfwsiutwv29urlthwyzhu81hzi0gery84ywsaqu8ovp8hzzwv7p9vp55z2xs7wm7l4v25ot7ds8of5f2hkq6kl05z9pz5onbu8a1336k68awaljns6kvawwsavhiz4iyi8ex2x436olu92etmhpo7ba9wk0316lfr1720slzx5scnwuzoc9t6vmd294p586lg94mq0vl5ocvb7kps27a0d1ycy2dc58epmucezaknrprhl6cec3f26d3jrj9atxgkn6ho829ohxt27oxywot1zi11xm1o6qisvqw10w92iatnpg3f6rl1stgu74t4v9wig1chewiw2f7hvipc8oeh2b0vw7ccb60y86i2iabhu01vrntbtsxt27g6qwzzc3k5i9xgdwns83trhaa5aglcd2b4fbcq39bwcvldmp9ogokqu5m2ysxuw0hhvvrnliskefjjvsf3peb5utw1qz949trdozclyxnlgmjihd9aamzbwscd3bc1qjnk1gt05qddbo6ft1w2apema1xtgqvlxj6c3kx7x4cb7qlkglzv5khv1chnqbtb76p6pq7ha7l71bo5eu4bqwn7507078r9iw1 == 
\7\z\u\n\6\i\x\1\p\9\n\u\c\e\e\q\v\o\q\t\x\o\5\j\u\z\8\q\h\n\h\v\i\7\x\7\t\0\f\n\w\t\t\3\s\4\p\q\6\8\2\g\j\g\f\z\2\i\b\x\k\2\g\p\6\3\y\0\z\k\d\n\x\3\j\2\l\f\z\t\s\x\3\8\a\v\5\u\l\k\c\m\o\7\5\p\7\x\4\d\k\5\d\z\8\m\z\0\n\2\y\5\d\2\m\1\c\a\2\f\r\r\t\0\r\k\0\n\x\z\5\o\y\a\m\g\o\o\l\h\m\c\o\9\n\m\1\1\5\2\k\x\i\d\h\p\6\s\l\k\8\p\1\m\y\q\q\k\v\3\v\2\v\2\8\k\2\e\7\7\5\h\w\9\j\j\m\1\l\l\n\r\b\8\g\i\y\h\7\k\p\b\n\d\b\g\l\q\v\q\b\0\7\2\v\4\l\p\j\z\d\o\q\k\m\t\r\a\c\5\g\b\6\s\1\h\v\6\n\0\4\g\w\i\b\i\y\a\b\9\h\y\s\4\4\u\l\i\i\j\1\n\s\3\i\2\x\l\5\t\m\6\b\2\n\7\w\l\9\w\d\5\n\w\p\9\2\0\m\e\3\6\l\5\z\t\j\k\o\k\j\b\u\3\4\a\d\y\6\9\c\x\u\n\k\8\n\a\p\l\o\p\t\i\3\0\u\i\i\b\q\q\t\0\w\a\l\j\m\w\h\9\y\4\8\x\6\j\3\g\1\k\c\p\i\b\v\r\y\x\7\v\m\e\1\k\y\c\w\4\x\1\a\q\a\s\x\3\v\l\p\c\t\y\3\5\l\k\k\8\d\7\3\4\u\q\v\1\e\d\u\e\w\c\l\d\3\i\o\c\u\x\5\9\z\h\7\2\q\p\f\3\b\0\j\f\w\s\i\u\t\w\v\2\9\u\r\l\t\h\w\y\z\h\u\8\1\h\z\i\0\g\e\r\y\8\4\y\w\s\a\q\u\8\o\v\p\8\h\z\z\w\v\7\p\9\v\p\5\5\z\2\x\s\7\w\m\7\l\4\v\2\5\o\t\7\d\s\8\o\f\5\f\2\h\k\q\6\k\l\0\5\z\9\p\z\5\o\n\b\u\8\a\1\3\3\6\k\6\8\a\w\a\l\j\n\s\6\k\v\a\w\w\s\a\v\h\i\z\4\i\y\i\8\e\x\2\x\4\3\6\o\l\u\9\2\e\t\m\h\p\o\7\b\a\9\w\k\0\3\1\6\l\f\r\1\7\2\0\s\l\z\x\5\s\c\n\w\u\z\o\c\9\t\6\v\m\d\2\9\4\p\5\8\6\l\g\9\4\m\q\0\v\l\5\o\c\v\b\7\k\p\s\2\7\a\0\d\1\y\c\y\2\d\c\5\8\e\p\m\u\c\e\z\a\k\n\r\p\r\h\l\6\c\e\c\3\f\2\6\d\3\j\r\j\9\a\t\x\g\k\n\6\h\o\8\2\9\o\h\x\t\2\7\o\x\y\w\o\t\1\z\i\1\1\x\m\1\o\6\q\i\s\v\q\w\1\0\w\9\2\i\a\t\n\p\g\3\f\6\r\l\1\s\t\g\u\7\4\t\4\v\9\w\i\g\1\c\h\e\w\i\w\2\f\7\h\v\i\p\c\8\o\e\h\2\b\0\v\w\7\c\c\b\6\0\y\8\6\i\2\i\a\b\h\u\0\1\v\r\n\t\b\t\s\x\t\2\7\g\6\q\w\z\z\c\3\k\5\i\9\x\g\d\w\n\s\8\3\t\r\h\a\a\5\a\g\l\c\d\2\b\4\f\b\c\q\3\9\b\w\c\v\l\d\m\p\9\o\g\o\k\q\u\5\m\2\y\s\x\u\w\0\h\h\v\v\r\n\l\i\s\k\e\f\j\j\v\s\f\3\p\e\b\5\u\t\w\1\q\z\9\4\9\t\r\d\o\z\c\l\y\x\n\l\g\m\j\i\h\d\9\a\a\m\z\b\w\s\c\d\3\b\c\1\q\j\n\k\1\g\t\0\5\q\d\d\b\o\6\f\t\1\w\2\a\p\e\m\a\1\x\t\g\q\v\l\x\j\6\c\3\k\x\7\x\4\c\b\7\q\l\k\g\l\z\v\5\k\h\v\1\c\h\n\q\b\t\b\7\6\p\6\p\q\7\h\a\7\l\7\1\b\o\5\e\u\4\b\q\w\n\7\5\0\7\0\7\8\r\9\i\w\1 ]] 00:15:49.555 13:53:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:15:49.555 13:53:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 7zun6ix1p9nuceeqvoqtxo5juz8qhnhvi7x7t0fnwtt3s4pq682gjgfz2ibxk2gp63y0zkdnx3j2lfztsx38av5ulkcmo75p7x4dk5dz8mz0n2y5d2m1ca2frrt0rk0nxz5oyamgoolhmco9nm1152kxidhp6slk8p1myqqkv3v2v28k2e775hw9jjm1llnrb8giyh7kpbndbglqvqb072v4lpjzdoqkmtrac5gb6s1hv6n04gwibiyab9hys44uliij1ns3i2xl5tm6b2n7wl9wd5nwp920me36l5ztjkokjbu34ady69cxunk8naplopti30uiibqqt0waljmwh9y48x6j3g1kcpibvryx7vme1kycw4x1aqasx3vlpcty35lkk8d734uqv1eduewcld3iocux59zh72qpf3b0jfwsiutwv29urlthwyzhu81hzi0gery84ywsaqu8ovp8hzzwv7p9vp55z2xs7wm7l4v25ot7ds8of5f2hkq6kl05z9pz5onbu8a1336k68awaljns6kvawwsavhiz4iyi8ex2x436olu92etmhpo7ba9wk0316lfr1720slzx5scnwuzoc9t6vmd294p586lg94mq0vl5ocvb7kps27a0d1ycy2dc58epmucezaknrprhl6cec3f26d3jrj9atxgkn6ho829ohxt27oxywot1zi11xm1o6qisvqw10w92iatnpg3f6rl1stgu74t4v9wig1chewiw2f7hvipc8oeh2b0vw7ccb60y86i2iabhu01vrntbtsxt27g6qwzzc3k5i9xgdwns83trhaa5aglcd2b4fbcq39bwcvldmp9ogokqu5m2ysxuw0hhvvrnliskefjjvsf3peb5utw1qz949trdozclyxnlgmjihd9aamzbwscd3bc1qjnk1gt05qddbo6ft1w2apema1xtgqvlxj6c3kx7x4cb7qlkglzv5khv1chnqbtb76p6pq7ha7l71bo5eu4bqwn7507078r9iw1 == 
\7\z\u\n\6\i\x\1\p\9\n\u\c\e\e\q\v\o\q\t\x\o\5\j\u\z\8\q\h\n\h\v\i\7\x\7\t\0\f\n\w\t\t\3\s\4\p\q\6\8\2\g\j\g\f\z\2\i\b\x\k\2\g\p\6\3\y\0\z\k\d\n\x\3\j\2\l\f\z\t\s\x\3\8\a\v\5\u\l\k\c\m\o\7\5\p\7\x\4\d\k\5\d\z\8\m\z\0\n\2\y\5\d\2\m\1\c\a\2\f\r\r\t\0\r\k\0\n\x\z\5\o\y\a\m\g\o\o\l\h\m\c\o\9\n\m\1\1\5\2\k\x\i\d\h\p\6\s\l\k\8\p\1\m\y\q\q\k\v\3\v\2\v\2\8\k\2\e\7\7\5\h\w\9\j\j\m\1\l\l\n\r\b\8\g\i\y\h\7\k\p\b\n\d\b\g\l\q\v\q\b\0\7\2\v\4\l\p\j\z\d\o\q\k\m\t\r\a\c\5\g\b\6\s\1\h\v\6\n\0\4\g\w\i\b\i\y\a\b\9\h\y\s\4\4\u\l\i\i\j\1\n\s\3\i\2\x\l\5\t\m\6\b\2\n\7\w\l\9\w\d\5\n\w\p\9\2\0\m\e\3\6\l\5\z\t\j\k\o\k\j\b\u\3\4\a\d\y\6\9\c\x\u\n\k\8\n\a\p\l\o\p\t\i\3\0\u\i\i\b\q\q\t\0\w\a\l\j\m\w\h\9\y\4\8\x\6\j\3\g\1\k\c\p\i\b\v\r\y\x\7\v\m\e\1\k\y\c\w\4\x\1\a\q\a\s\x\3\v\l\p\c\t\y\3\5\l\k\k\8\d\7\3\4\u\q\v\1\e\d\u\e\w\c\l\d\3\i\o\c\u\x\5\9\z\h\7\2\q\p\f\3\b\0\j\f\w\s\i\u\t\w\v\2\9\u\r\l\t\h\w\y\z\h\u\8\1\h\z\i\0\g\e\r\y\8\4\y\w\s\a\q\u\8\o\v\p\8\h\z\z\w\v\7\p\9\v\p\5\5\z\2\x\s\7\w\m\7\l\4\v\2\5\o\t\7\d\s\8\o\f\5\f\2\h\k\q\6\k\l\0\5\z\9\p\z\5\o\n\b\u\8\a\1\3\3\6\k\6\8\a\w\a\l\j\n\s\6\k\v\a\w\w\s\a\v\h\i\z\4\i\y\i\8\e\x\2\x\4\3\6\o\l\u\9\2\e\t\m\h\p\o\7\b\a\9\w\k\0\3\1\6\l\f\r\1\7\2\0\s\l\z\x\5\s\c\n\w\u\z\o\c\9\t\6\v\m\d\2\9\4\p\5\8\6\l\g\9\4\m\q\0\v\l\5\o\c\v\b\7\k\p\s\2\7\a\0\d\1\y\c\y\2\d\c\5\8\e\p\m\u\c\e\z\a\k\n\r\p\r\h\l\6\c\e\c\3\f\2\6\d\3\j\r\j\9\a\t\x\g\k\n\6\h\o\8\2\9\o\h\x\t\2\7\o\x\y\w\o\t\1\z\i\1\1\x\m\1\o\6\q\i\s\v\q\w\1\0\w\9\2\i\a\t\n\p\g\3\f\6\r\l\1\s\t\g\u\7\4\t\4\v\9\w\i\g\1\c\h\e\w\i\w\2\f\7\h\v\i\p\c\8\o\e\h\2\b\0\v\w\7\c\c\b\6\0\y\8\6\i\2\i\a\b\h\u\0\1\v\r\n\t\b\t\s\x\t\2\7\g\6\q\w\z\z\c\3\k\5\i\9\x\g\d\w\n\s\8\3\t\r\h\a\a\5\a\g\l\c\d\2\b\4\f\b\c\q\3\9\b\w\c\v\l\d\m\p\9\o\g\o\k\q\u\5\m\2\y\s\x\u\w\0\h\h\v\v\r\n\l\i\s\k\e\f\j\j\v\s\f\3\p\e\b\5\u\t\w\1\q\z\9\4\9\t\r\d\o\z\c\l\y\x\n\l\g\m\j\i\h\d\9\a\a\m\z\b\w\s\c\d\3\b\c\1\q\j\n\k\1\g\t\0\5\q\d\d\b\o\6\f\t\1\w\2\a\p\e\m\a\1\x\t\g\q\v\l\x\j\6\c\3\k\x\7\x\4\c\b\7\q\l\k\g\l\z\v\5\k\h\v\1\c\h\n\q\b\t\b\7\6\p\6\p\q\7\h\a\7\l\7\1\b\o\5\e\u\4\b\q\w\n\7\5\0\7\0\7\8\r\9\i\w\1 ]] 00:15:49.555 13:53:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:15:49.816 13:53:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:15:49.816 13:53:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:15:49.816 13:53:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:49.816 13:53:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 [2024-05-15 13:53:48.302535] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:49.816 [2024-05-15 13:53:48.302598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63406 ] 00:15:49.816 { 00:15:49.816 "subsystems": [ 00:15:49.816 { 00:15:49.816 "subsystem": "bdev", 00:15:49.816 "config": [ 00:15:49.816 { 00:15:49.816 "params": { 00:15:49.816 "block_size": 512, 00:15:49.816 "num_blocks": 1048576, 00:15:49.816 "name": "malloc0" 00:15:49.816 }, 00:15:49.816 "method": "bdev_malloc_create" 00:15:49.816 }, 00:15:49.816 { 00:15:49.816 "params": { 00:15:49.816 "filename": "/dev/zram1", 00:15:49.816 "name": "uring0" 00:15:49.816 }, 00:15:49.816 "method": "bdev_uring_create" 00:15:49.816 }, 00:15:49.816 { 00:15:49.816 "method": "bdev_wait_for_examine" 00:15:49.816 } 00:15:49.816 ] 00:15:49.816 } 00:15:49.816 ] 00:15:49.816 } 00:15:50.075 [2024-05-15 13:53:48.441050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.075 [2024-05-15 13:53:48.543745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.277  Copying: 195/512 [MB] (195 MBps) Copying: 385/512 [MB] (190 MBps) Copying: 512/512 [MB] (average 193 MBps) 00:15:53.277 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:53.277 13:53:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:53.277 [2024-05-15 13:53:51.809342] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:53.277 [2024-05-15 13:53:51.809418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63456 ] 00:15:53.277 { 00:15:53.277 "subsystems": [ 00:15:53.277 { 00:15:53.277 "subsystem": "bdev", 00:15:53.277 "config": [ 00:15:53.277 { 00:15:53.277 "params": { 00:15:53.277 "block_size": 512, 00:15:53.277 "num_blocks": 1048576, 00:15:53.277 "name": "malloc0" 00:15:53.277 }, 00:15:53.277 "method": "bdev_malloc_create" 00:15:53.277 }, 00:15:53.277 { 00:15:53.277 "params": { 00:15:53.277 "filename": "/dev/zram1", 00:15:53.277 "name": "uring0" 00:15:53.277 }, 00:15:53.277 "method": "bdev_uring_create" 00:15:53.277 }, 00:15:53.277 { 00:15:53.277 "params": { 00:15:53.277 "name": "uring0" 00:15:53.277 }, 00:15:53.277 "method": "bdev_uring_delete" 00:15:53.277 }, 00:15:53.277 { 00:15:53.277 "method": "bdev_wait_for_examine" 00:15:53.277 } 00:15:53.277 ] 00:15:53.277 } 00:15:53.277 ] 00:15:53.277 } 00:15:53.536 [2024-05-15 13:53:51.950338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.536 [2024-05-15 13:53:52.052837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.377  Copying: 0/0 [B] (average 0 Bps) 00:15:54.377 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:54.377 13:53:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:15:54.377 
[2024-05-15 13:53:52.679024] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:54.377 [2024-05-15 13:53:52.679096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63486 ] 00:15:54.377 { 00:15:54.377 "subsystems": [ 00:15:54.377 { 00:15:54.377 "subsystem": "bdev", 00:15:54.377 "config": [ 00:15:54.377 { 00:15:54.377 "params": { 00:15:54.377 "block_size": 512, 00:15:54.377 "num_blocks": 1048576, 00:15:54.377 "name": "malloc0" 00:15:54.377 }, 00:15:54.377 "method": "bdev_malloc_create" 00:15:54.377 }, 00:15:54.377 { 00:15:54.377 "params": { 00:15:54.377 "filename": "/dev/zram1", 00:15:54.377 "name": "uring0" 00:15:54.377 }, 00:15:54.377 "method": "bdev_uring_create" 00:15:54.377 }, 00:15:54.377 { 00:15:54.377 "params": { 00:15:54.377 "name": "uring0" 00:15:54.377 }, 00:15:54.377 "method": "bdev_uring_delete" 00:15:54.377 }, 00:15:54.377 { 00:15:54.377 "method": "bdev_wait_for_examine" 00:15:54.377 } 00:15:54.377 ] 00:15:54.377 } 00:15:54.377 ] 00:15:54.377 } 00:15:54.635 [2024-05-15 13:53:52.954491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.635 [2024-05-15 13:53:53.053847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.893 [2024-05-15 13:53:53.259193] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:15:54.893 [2024-05-15 13:53:53.259233] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:15:54.893 [2024-05-15 13:53:53.259242] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:15:54.893 [2024-05-15 13:53:53.259252] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:55.151 [2024-05-15 13:53:53.508836] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:15:55.151 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:15:55.409 00:15:55.409 real 0m13.295s 00:15:55.409 user 0m8.818s 00:15:55.409 sys 0m10.548s 00:15:55.409 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:55.409 13:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.409 ************************************ 00:15:55.409 END TEST dd_uring_copy 00:15:55.409 ************************************ 00:15:55.409 00:15:55.409 real 0m13.492s 00:15:55.409 user 0m8.896s 00:15:55.409 sys 0m10.675s 00:15:55.409 13:53:53 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:55.409 13:53:53 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:15:55.409 ************************************ 00:15:55.409 END TEST spdk_dd_uring 00:15:55.409 ************************************ 00:15:55.668 13:53:53 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:15:55.668 13:53:53 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:55.668 13:53:53 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:55.668 13:53:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:55.668 ************************************ 00:15:55.668 START TEST spdk_dd_sparse 00:15:55.668 ************************************ 00:15:55.668 13:53:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:15:55.668 * Looking for test storage... 00:15:55.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.668 13:53:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:15:55.669 1+0 records in 00:15:55.669 1+0 records out 00:15:55.669 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00524334 s, 800 MB/s 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:15:55.669 1+0 records in 00:15:55.669 1+0 records out 00:15:55.669 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00887648 s, 473 MB/s 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:15:55.669 1+0 records in 00:15:55.669 1+0 records out 00:15:55.669 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00791771 s, 530 MB/s 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:55.669 ************************************ 00:15:55.669 START TEST dd_sparse_file_to_file 00:15:55.669 ************************************ 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # 
file_to_file 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:15:55.669 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:15:55.670 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:15:55.670 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:55.929 [2024-05-15 13:53:54.240354] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:55.929 [2024-05-15 13:53:54.240418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63573 ] 00:15:55.929 { 00:15:55.929 "subsystems": [ 00:15:55.929 { 00:15:55.929 "subsystem": "bdev", 00:15:55.929 "config": [ 00:15:55.929 { 00:15:55.929 "params": { 00:15:55.929 "block_size": 4096, 00:15:55.929 "filename": "dd_sparse_aio_disk", 00:15:55.929 "name": "dd_aio" 00:15:55.929 }, 00:15:55.929 "method": "bdev_aio_create" 00:15:55.929 }, 00:15:55.929 { 00:15:55.929 "params": { 00:15:55.929 "lvs_name": "dd_lvstore", 00:15:55.929 "bdev_name": "dd_aio" 00:15:55.929 }, 00:15:55.929 "method": "bdev_lvol_create_lvstore" 00:15:55.929 }, 00:15:55.929 { 00:15:55.929 "method": "bdev_wait_for_examine" 00:15:55.929 } 00:15:55.929 ] 00:15:55.929 } 00:15:55.929 ] 00:15:55.929 } 00:15:55.929 [2024-05-15 13:53:54.375183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.929 [2024-05-15 13:53:54.470375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.446  Copying: 12/36 [MB] (average 800 MBps) 00:15:56.446 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- 
# stat1_b=24576 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:15:56.446 00:15:56.446 real 0m0.664s 00:15:56.446 user 0m0.431s 00:15:56.446 sys 0m0.311s 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:56.446 ************************************ 00:15:56.446 END TEST dd_sparse_file_to_file 00:15:56.446 ************************************ 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:56.446 ************************************ 00:15:56.446 START TEST dd_sparse_file_to_bdev 00:15:56.446 ************************************ 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:56.446 13:53:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:56.446 [2024-05-15 13:53:54.961188] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:15:56.446 [2024-05-15 13:53:54.961254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63621 ] 00:15:56.447 { 00:15:56.447 "subsystems": [ 00:15:56.447 { 00:15:56.447 "subsystem": "bdev", 00:15:56.447 "config": [ 00:15:56.447 { 00:15:56.447 "params": { 00:15:56.447 "block_size": 4096, 00:15:56.447 "filename": "dd_sparse_aio_disk", 00:15:56.447 "name": "dd_aio" 00:15:56.447 }, 00:15:56.447 "method": "bdev_aio_create" 00:15:56.447 }, 00:15:56.447 { 00:15:56.447 "params": { 00:15:56.447 "lvs_name": "dd_lvstore", 00:15:56.447 "lvol_name": "dd_lvol", 00:15:56.447 "size_in_mib": 36, 00:15:56.447 "thin_provision": true 00:15:56.447 }, 00:15:56.447 "method": "bdev_lvol_create" 00:15:56.447 }, 00:15:56.447 { 00:15:56.447 "method": "bdev_wait_for_examine" 00:15:56.447 } 00:15:56.447 ] 00:15:56.447 } 00:15:56.447 ] 00:15:56.447 } 00:15:56.707 [2024-05-15 13:53:55.087702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.707 [2024-05-15 13:53:55.206485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.225  Copying: 12/36 [MB] (average 428 MBps) 00:15:57.225 00:15:57.225 00:15:57.225 real 0m0.645s 00:15:57.225 user 0m0.425s 00:15:57.225 sys 0m0.308s 00:15:57.225 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:57.225 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:57.225 ************************************ 00:15:57.225 END TEST dd_sparse_file_to_bdev 00:15:57.225 ************************************ 00:15:57.225 13:53:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:15:57.225 13:53:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:57.225 13:53:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:57.226 ************************************ 00:15:57.226 START TEST dd_sparse_bdev_to_file 00:15:57.226 ************************************ 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:15:57.226 13:53:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:57.226 [2024-05-15 
13:53:55.665031] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:57.226 [2024-05-15 13:53:55.665123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63659 ] 00:15:57.226 { 00:15:57.226 "subsystems": [ 00:15:57.226 { 00:15:57.226 "subsystem": "bdev", 00:15:57.226 "config": [ 00:15:57.226 { 00:15:57.226 "params": { 00:15:57.226 "block_size": 4096, 00:15:57.226 "filename": "dd_sparse_aio_disk", 00:15:57.226 "name": "dd_aio" 00:15:57.226 }, 00:15:57.226 "method": "bdev_aio_create" 00:15:57.226 }, 00:15:57.226 { 00:15:57.226 "method": "bdev_wait_for_examine" 00:15:57.226 } 00:15:57.226 ] 00:15:57.226 } 00:15:57.226 ] 00:15:57.226 } 00:15:57.485 [2024-05-15 13:53:55.807298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.485 [2024-05-15 13:53:55.902937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.744  Copying: 12/36 [MB] (average 750 MBps) 00:15:57.744 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:15:57.744 ************************************ 00:15:57.744 END TEST dd_sparse_bdev_to_file 00:15:57.744 ************************************ 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:15:57.744 00:15:57.744 real 0m0.647s 00:15:57.744 user 0m0.416s 00:15:57.744 sys 0m0.312s 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:57.744 13:53:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:58.003 13:53:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:15:58.003 13:53:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:15:58.003 13:53:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:15:58.003 13:53:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:15:58.003 13:53:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:15:58.003 ************************************ 00:15:58.003 END TEST spdk_dd_sparse 00:15:58.003 ************************************ 00:15:58.003 00:15:58.003 real 0m2.371s 00:15:58.003 user 0m1.408s 00:15:58.003 sys 0m1.201s 00:15:58.003 13:53:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.003 13:53:56 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:58.003 13:53:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:15:58.003 13:53:56 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.003 13:53:56 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.003 13:53:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:58.003 ************************************ 00:15:58.003 START TEST spdk_dd_negative 00:15:58.003 ************************************ 00:15:58.003 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:15:58.003 * Looking for test storage... 00:15:58.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:58.003 13:53:56 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.262 13:53:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.262 13:53:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.262 13:53:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:58.263 ************************************ 00:15:58.263 START TEST dd_invalid_arguments 00:15:58.263 ************************************ 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:15:58.263 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:15:58.263 00:15:58.263 CPU options: 00:15:58.263 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:58.263 (like [0,1,10]) 00:15:58.263 --lcores lcore to CPU mapping list. The list is in the format: 00:15:58.263 [<,lcores[@CPUs]>...] 00:15:58.263 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:58.263 Within the group, '-' is used for range separator, 00:15:58.263 ',' is used for single number separator. 00:15:58.263 '( )' can be omitted for single element group, 00:15:58.263 '@' can be omitted if cpus and lcores have the same value 00:15:58.263 --disable-cpumask-locks Disable CPU core lock files. 00:15:58.263 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:58.263 pollers in the app support interrupt mode) 00:15:58.263 -p, --main-core main (primary) core for DPDK 00:15:58.263 00:15:58.263 Configuration options: 00:15:58.263 -c, --config, --json JSON config file 00:15:58.263 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:58.263 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:15:58.263 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:58.263 --rpcs-allowed comma-separated list of permitted RPCS 00:15:58.263 --json-ignore-init-errors don't exit on invalid config entry 00:15:58.263 00:15:58.263 Memory options: 00:15:58.263 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:58.263 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:58.263 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:58.263 -R, --huge-unlink unlink huge files after initialization 00:15:58.263 -n, --mem-channels number of memory channels used for DPDK 00:15:58.263 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:15:58.263 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:58.263 --no-huge run without using hugepages 00:15:58.263 -i, --shm-id shared memory ID (optional) 00:15:58.263 -g, --single-file-segments force creating just one hugetlbfs file 00:15:58.263 00:15:58.263 PCI options: 00:15:58.263 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:58.263 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:58.263 -u, --no-pci disable PCI access 00:15:58.263 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:58.263 00:15:58.263 Log options: 00:15:58.263 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:15:58.263 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:15:58.263 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:15:58.263 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:15:58.263 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:15:58.263 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:15:58.263 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:15:58.263 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:15:58.263 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:15:58.263 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:15:58.263 virtio_vfio_user, vmd) 00:15:58.263 --silence-noticelog 
disable notice level logging to stderr 00:15:58.263 00:15:58.263 Trace options: 00:15:58.263 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:58.263 setting 0 to disable trace (default 32768) 00:15:58.263 Tracepoints vary in size and can use more than one trace entry. 00:15:58.263 -e, --tpoint-group [:] 00:15:58.263 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:15:58.263 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:15:58.263 [2024-05-15 13:53:56.648834] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:15:58.263 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:15:58.263 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:58.263 a tracepoint group. First tpoint inside a group can be enabled by 00:15:58.263 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:58.263 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:15:58.263 in /include/spdk_internal/trace_defs.h 00:15:58.263 00:15:58.263 Other options: 00:15:58.263 -h, --help show this usage 00:15:58.263 -v, --version print SPDK version 00:15:58.263 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:58.263 --env-context Opaque context for use of the env implementation 00:15:58.263 00:15:58.263 Application specific: 00:15:58.263 [--------- DD Options ---------] 00:15:58.263 --if Input file. Must specify either --if or --ib. 00:15:58.263 --ib Input bdev. Must specify either --if or --ib 00:15:58.263 --of Output file. Must specify either --of or --ob. 00:15:58.263 --ob Output bdev. Must specify either --of or --ob. 00:15:58.263 --iflag Input file flags. 00:15:58.263 --oflag Output file flags. 00:15:58.263 --bs I/O unit size (default: 4096) 00:15:58.263 --qd Queue depth (default: 2) 00:15:58.263 --count I/O unit count. The number of I/O units to copy. (default: all) 00:15:58.263 --skip Skip this many I/O units at start of input. (default: 0) 00:15:58.263 --seek Skip this many I/O units at start of output. (default: 0) 00:15:58.263 --aio Force usage of AIO.
(by default io_uring is used if available) 00:15:58.263 --sparse Enable hole skipping in input target 00:15:58.263 Available iflag and oflag values: 00:15:58.263 append - append mode 00:15:58.263 direct - use direct I/O for data 00:15:58.263 directory - fail unless a directory 00:15:58.263 dsync - use synchronized I/O for data 00:15:58.263 noatime - do not update access time 00:15:58.263 noctty - do not assign controlling terminal from file 00:15:58.263 nofollow - do not follow symlinks 00:15:58.263 nonblock - use non-blocking I/O 00:15:58.263 sync - use synchronized I/O for data and metadata 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.263 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.264 00:15:58.264 real 0m0.070s 00:15:58.264 user 0m0.034s 00:15:58.264 sys 0m0.033s 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:15:58.264 ************************************ 00:15:58.264 END TEST dd_invalid_arguments 00:15:58.264 ************************************ 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:58.264 ************************************ 00:15:58.264 START TEST dd_double_input 00:15:58.264 ************************************ 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:15:58.264 [2024-05-15 13:53:56.774530] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:15:58.264 ************************************ 00:15:58.264 END TEST dd_double_input 00:15:58.264 ************************************ 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.264 00:15:58.264 real 0m0.058s 00:15:58.264 user 0m0.036s 00:15:58.264 sys 0m0.020s 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.264 13:53:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:58.524 ************************************ 00:15:58.524 START TEST dd_double_output 00:15:58.524 ************************************ 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.524 13:53:56 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:15:58.524 [2024-05-15 13:53:56.903915] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.524 00:15:58.524 real 0m0.065s 00:15:58.524 user 0m0.037s 00:15:58.524 sys 0m0.028s 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:15:58.524 ************************************ 00:15:58.524 END TEST dd_double_output 00:15:58.524 ************************************ 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:58.524 ************************************ 00:15:58.524 START TEST dd_no_input 00:15:58.524 ************************************ 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:58.524 13:53:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:15:58.524 [2024-05-15 13:53:57.034777] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:15:58.524 13:53:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:15:58.524 13:53:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.524 13:53:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.524 13:53:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.524 00:15:58.524 real 0m0.066s 00:15:58.524 user 0m0.032s 00:15:58.524 sys 0m0.033s 00:15:58.524 13:53:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.524 13:53:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:15:58.524 ************************************ 00:15:58.524 END TEST dd_no_input 00:15:58.524 ************************************ 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:58.784 ************************************ 00:15:58.784 START TEST dd_no_output 00:15:58.784 ************************************ 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.784 13:53:57 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:58.784 [2024-05-15 13:53:57.169797] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.784 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.785 00:15:58.785 real 0m0.070s 00:15:58.785 user 0m0.037s 00:15:58.785 sys 0m0.031s 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.785 ************************************ 00:15:58.785 END TEST dd_no_output 00:15:58.785 ************************************ 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:58.785 ************************************ 00:15:58.785 START TEST dd_wrong_blocksize 00:15:58.785 ************************************ 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.785 13:53:57 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:15:58.785 [2024-05-15 13:53:57.309358] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.785 00:15:58.785 real 0m0.069s 00:15:58.785 user 0m0.038s 00:15:58.785 sys 0m0.031s 00:15:58.785 ************************************ 00:15:58.785 END TEST dd_wrong_blocksize 00:15:58.785 ************************************ 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.785 13:53:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:59.044 ************************************ 00:15:59.044 START TEST dd_smaller_blocksize 00:15:59.044 ************************************ 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.044 13:53:57 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:59.044 13:53:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:15:59.044 [2024-05-15 13:53:57.443397] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:15:59.044 [2024-05-15 13:53:57.443472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63872 ] 00:15:59.044 [2024-05-15 13:53:57.584628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.303 [2024-05-15 13:53:57.684558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.562 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:15:59.562 [2024-05-15 13:53:58.031127] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:15:59.562 [2024-05-15 13:53:58.031185] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:59.820 [2024-05-15 13:53:58.125921] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:59.820 ************************************ 00:15:59.820 END TEST dd_smaller_blocksize 00:15:59.820 ************************************ 00:15:59.820 00:15:59.820 real 0m0.850s 00:15:59.820 user 0m0.391s 00:15:59.820 sys 0m0.352s 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:59.820 ************************************ 00:15:59.820 START TEST dd_invalid_count 00:15:59.820 ************************************ 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:15:59.820 [2024-05-15 13:53:58.351587] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:59.820 00:15:59.820 real 0m0.066s 00:15:59.820 user 0m0.037s 00:15:59.820 sys 0m0.029s 00:15:59.820 ************************************ 00:15:59.820 END TEST dd_invalid_count 00:15:59.820 ************************************ 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:59.820 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.102 13:53:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:16:00.102 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:00.102 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.102 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:00.102 ************************************ 00:16:00.102 START TEST dd_invalid_oflag 00:16:00.102 ************************************ 00:16:00.102 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:16:00.102 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:00.103 [2024-05-15 13:53:58.482190] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.103 00:16:00.103 real 0m0.069s 00:16:00.103 user 0m0.042s 00:16:00.103 sys 0m0.027s 00:16:00.103 ************************************ 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:16:00.103 END TEST dd_invalid_oflag 00:16:00.103 ************************************ 00:16:00.103 
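For anyone replaying the flag-restriction checks by hand, a minimal sketch of what the dd_invalid_oflag test above and the dd_invalid_iflag test below boil down to. The spdk_dd path and the two invocations are taken verbatim from this log; the expected-error comments paraphrase the *ERROR* lines it captures, and the echo lines are illustrative only, not captured output.

    #!/usr/bin/env bash
    # Sketch only: rerun the oflag/iflag negative checks outside the autotest harness.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path used throughout this log

    # --oflag is only accepted together with --of; spdk_dd is expected to exit non-zero
    # and print "--oflags may be used only with --of" (spdk_dd.c:1523 above).
    if ! "$SPDK_DD" --ib= --ob= --oflag=0; then
        echo "invalid_oflag rejected as expected"
    fi

    # --iflag is only accepted together with --if; spdk_dd is expected to exit non-zero
    # and print "--iflags may be used only with --if" (spdk_dd.c:1529 below).
    if ! "$SPDK_DD" --ib= --ob= --iflag=0; then
        echo "invalid_iflag rejected as expected"
    fi
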
13:53:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:00.103 ************************************ 00:16:00.103 START TEST dd_invalid_iflag 00:16:00.103 ************************************ 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:00.103 [2024-05-15 13:53:58.613462] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:16:00.103 ************************************ 00:16:00.103 END TEST dd_invalid_iflag 00:16:00.103 ************************************ 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.103 00:16:00.103 real 0m0.066s 00:16:00.103 user 0m0.032s 00:16:00.103 sys 0m0.033s 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.103 13:53:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative -- 
dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:00.372 ************************************ 00:16:00.372 START TEST dd_unknown_flag 00:16:00.372 ************************************ 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:00.372 13:53:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:00.372 [2024-05-15 13:53:58.748516] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:16:00.372 [2024-05-15 13:53:58.748593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63974 ] 00:16:00.373 [2024-05-15 13:53:58.889309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.632 [2024-05-15 13:53:58.987144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.632 [2024-05-15 13:53:59.055512] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:16:00.632 [2024-05-15 13:53:59.055565] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:00.632 [2024-05-15 13:53:59.055615] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:16:00.632 [2024-05-15 13:53:59.055625] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:00.632 [2024-05-15 13:53:59.055838] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:16:00.632 [2024-05-15 13:53:59.055852] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:00.632 [2024-05-15 13:53:59.055896] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:16:00.632 [2024-05-15 13:53:59.055904] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:16:00.632 [2024-05-15 13:53:59.148028] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:00.891 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:16:00.891 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.891 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:16:00.891 ************************************ 00:16:00.891 END TEST dd_unknown_flag 00:16:00.891 ************************************ 00:16:00.891 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:16:00.891 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:16:00.891 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.891 00:16:00.891 real 0m0.576s 00:16:00.891 user 0m0.334s 00:16:00.891 sys 0m0.145s 00:16:00.891 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:00.892 ************************************ 00:16:00.892 START TEST dd_invalid_json 00:16:00.892 ************************************ 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:00.892 13:53:59 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:00.892 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:00.892 [2024-05-15 13:53:59.387981] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:16:00.892 [2024-05-15 13:53:59.388047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63998 ] 00:16:01.152 [2024-05-15 13:53:59.529126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.152 [2024-05-15 13:53:59.632314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.152 [2024-05-15 13:53:59.632377] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:16:01.152 [2024-05-15 13:53:59.632391] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:01.152 [2024-05-15 13:53:59.632399] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:01.152 [2024-05-15 13:53:59.632429] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:01.412 00:16:01.412 real 0m0.414s 00:16:01.412 user 0m0.246s 00:16:01.412 sys 0m0.066s 00:16:01.412 ************************************ 00:16:01.412 END TEST dd_invalid_json 00:16:01.412 ************************************ 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:16:01.412 00:16:01.412 real 0m3.378s 00:16:01.412 user 0m1.616s 00:16:01.412 sys 0m1.436s 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.412 ************************************ 00:16:01.412 END TEST spdk_dd_negative 00:16:01.412 ************************************ 00:16:01.412 13:53:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:01.412 ************************************ 00:16:01.412 END TEST spdk_dd 00:16:01.412 ************************************ 00:16:01.412 00:16:01.412 real 1m12.124s 00:16:01.412 user 0m46.081s 00:16:01.412 sys 0m29.944s 00:16:01.412 13:53:59 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.412 13:53:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:16:01.412 13:53:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:01.412 13:53:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:01.412 13:53:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:01.412 13:53:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.412 13:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:01.671 13:53:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:01.671 13:53:59 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:16:01.671 13:53:59 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:16:01.671 13:53:59 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:16:01.671 13:53:59 -- spdk/autotest.sh@279 -- # '[' tcp = 
rdma ']' 00:16:01.671 13:53:59 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:16:01.671 13:53:59 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:01.671 13:53:59 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:01.671 13:53:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:01.671 13:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:01.671 ************************************ 00:16:01.671 START TEST nvmf_tcp 00:16:01.671 ************************************ 00:16:01.671 13:53:59 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:01.671 * Looking for test storage... 00:16:01.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.671 13:54:00 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.671 13:54:00 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.671 13:54:00 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.671 13:54:00 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.671 13:54:00 nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.671 13:54:00 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.671 13:54:00 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:16:01.671 13:54:00 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:16:01.671 13:54:00 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:01.671 13:54:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:16:01.671 13:54:00 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:01.671 13:54:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:01.671 13:54:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:01.671 13:54:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.671 ************************************ 00:16:01.671 START TEST nvmf_host_management 00:16:01.671 ************************************ 00:16:01.671 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:01.930 * Looking for test storage... 
00:16:01.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.930 13:54:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:01.931 Cannot find device "nvmf_init_br" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:01.931 Cannot find device "nvmf_tgt_br" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.931 Cannot find device "nvmf_tgt_br2" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:01.931 Cannot find device "nvmf_init_br" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:01.931 Cannot find device "nvmf_tgt_br" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:16:01.931 13:54:00 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:01.931 Cannot find device "nvmf_tgt_br2" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:01.931 Cannot find device "nvmf_br" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:01.931 Cannot find device "nvmf_init_if" 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.931 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.190 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:02.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:02.449 00:16:02.449 --- 10.0.0.2 ping statistics --- 00:16:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.449 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:02.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:16:02.449 00:16:02.449 --- 10.0.0.3 ping statistics --- 00:16:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.449 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:02.449 00:16:02.449 --- 10.0.0.1 ping statistics --- 00:16:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.449 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:02.449 13:54:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64266 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64266 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 64266 ']' 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:02.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:02.450 13:54:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:02.450 [2024-05-15 13:54:00.894186] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:16:02.450 [2024-05-15 13:54:00.894260] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.707 [2024-05-15 13:54:01.036901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.707 [2024-05-15 13:54:01.129378] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.707 [2024-05-15 13:54:01.129430] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.707 [2024-05-15 13:54:01.129440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.707 [2024-05-15 13:54:01.129448] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.707 [2024-05-15 13:54:01.129455] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
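The nvmf_veth_init sequence above (nvmf/common.sh@141 through @207) builds the disposable test network that the nvmf_tgt process launched here runs inside of: the target sits in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, the veth peer ends are joined by the nvmf_br bridge, and an iptables rule admits NVMe/TCP traffic on port 4420. A condensed sketch of that topology, using only commands, interface names and addresses that appear in the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 and the various "ip link set ... up" calls are elided for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator-to-target reachability check, as in the trace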
00:16:02.707 [2024-05-15 13:54:01.129656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.707 [2024-05-15 13:54:01.130556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.708 [2024-05-15 13:54:01.130687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:02.708 [2024-05-15 13:54:01.130689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:03.274 [2024-05-15 13:54:01.785261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.274 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:03.533 Malloc0 00:16:03.533 [2024-05-15 13:54:01.863530] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:03.533 [2024-05-15 13:54:01.863766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64321 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64321 /var/tmp/bdevperf.sock 
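The configuration batch itself (the cat at target/host_management.sh@23 piped into the rpc_cmd at @30) is not echoed in the trace; only its effects are visible, namely the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2 port 4420. The following is a plausible hand-written equivalent against the target's RPC socket, assuming the standard rpc.py method names and the sizes declared in the script (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512); the real rpcs.txt may use different flags:

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0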
00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 64321 ']' 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:03.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:03.533 { 00:16:03.533 "params": { 00:16:03.533 "name": "Nvme$subsystem", 00:16:03.533 "trtype": "$TEST_TRANSPORT", 00:16:03.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:03.533 "adrfam": "ipv4", 00:16:03.533 "trsvcid": "$NVMF_PORT", 00:16:03.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:03.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:03.533 "hdgst": ${hdgst:-false}, 00:16:03.533 "ddgst": ${ddgst:-false} 00:16:03.533 }, 00:16:03.533 "method": "bdev_nvme_attach_controller" 00:16:03.533 } 00:16:03.533 EOF 00:16:03.533 )") 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:03.533 13:54:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:03.533 "params": { 00:16:03.533 "name": "Nvme0", 00:16:03.533 "trtype": "tcp", 00:16:03.533 "traddr": "10.0.0.2", 00:16:03.533 "adrfam": "ipv4", 00:16:03.533 "trsvcid": "4420", 00:16:03.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.533 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:03.533 "hdgst": false, 00:16:03.533 "ddgst": false 00:16:03.533 }, 00:16:03.533 "method": "bdev_nvme_attach_controller" 00:16:03.533 }' 00:16:03.533 [2024-05-15 13:54:01.984951] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
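gen_nvmf_target_json expands the heredoc shown above once per subsystem index (here just 0) and pretty-prints it with jq; the bdev_nvme_attach_controller stanza for Nvme0 at 10.0.0.2:4420 is then read by bdevperf through --json /dev/fd/63, i.e. a process substitution rather than a config file on disk. A sketch of the equivalent invocation, with the workload parameters taken from the traced command line (queue depth 64, 64 KiB I/Os, verify workload, 10 second run):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10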
00:16:03.533 [2024-05-15 13:54:01.985025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64321 ] 00:16:03.792 [2024-05-15 13:54:02.126105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.792 [2024-05-15 13:54:02.217301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.051 Running I/O for 10 seconds... 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:04.310 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:04.572 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.572 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:16:04.572 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:16:04.572 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:04.572 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:04.572 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:04.572 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:04.573 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.573 13:54:02 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:04.573 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.573 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:04.573 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.573 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:04.573 [2024-05-15 13:54:02.916277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:04.573 [2024-05-15 13:54:02.916488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 
[2024-05-15 13:54:02.916698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 
13:54:02.916903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.916981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.916990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 
13:54:02.917110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 
13:54:02.917326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.573 [2024-05-15 13:54:02.917471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.573 [2024-05-15 13:54:02.917479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.574 [2024-05-15 13:54:02.917499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.574 [2024-05-15 
13:54:02.917518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.574 [2024-05-15 13:54:02.917545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.574 [2024-05-15 13:54:02.917565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.574 [2024-05-15 13:54:02.917602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.574 [2024-05-15 13:54:02.917623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:04.574 [2024-05-15 13:54:02.917643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc79a0 is same with the state(5) to be set 00:16:04.574 [2024-05-15 13:54:02.917717] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdc79a0 was disconnected and freed. reset controller. 
00:16:04.574 [2024-05-15 13:54:02.917837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.574 [2024-05-15 13:54:02.917856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.574 [2024-05-15 13:54:02.917882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.574 [2024-05-15 13:54:02.917906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.574 [2024-05-15 13:54:02.917930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.574 [2024-05-15 13:54:02.917942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8070 is same with the state(5) to be set 00:16:04.574 [2024-05-15 13:54:02.918908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:04.574 task offset: 8192 on job bdev=Nvme0n1 fails 00:16:04.574 00:16:04.574 Latency(us) 00:16:04.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.574 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:04.574 Job: Nvme0n1 ended in about 0.54 seconds with error 00:16:04.574 Verification LBA range: start 0x0 length 0x400 00:16:04.574 Nvme0n1 : 0.54 2001.97 125.12 117.76 0.00 29513.50 2066.09 30951.94 00:16:04.574 =================================================================================================================== 00:16:04.574 Total : 2001.97 125.12 117.76 0.00 29513.50 2066.09 30951.94 00:16:04.574 [2024-05-15 13:54:02.920956] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:04.574 [2024-05-15 13:54:02.920989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc8070 (9): Bad file descriptor 00:16:04.574 [2024-05-15 13:54:02.923960] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
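The burst of ABORTED - SQ DELETION completions above is the point of the host_management test: with verify I/O in flight, the initiator's host NQN is revoked from the subsystem, the target tears the queue pair down (qpair 0xdc79a0 disconnected and freed), bdevperf drops into its reset path, and because the host is immediately re-admitted the controller reset completes successfully. The two RPCs driving this are the rpc_cmd calls traced at target/host_management.sh@84 and @85; rpc_cmd is the autotest wrapper around scripts/rpc.py, here talking to the target's default socket /var/tmp/spdk.sock:

  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # cuts off the running initiator
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # allows the reset to reconnect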
00:16:04.574 13:54:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.574 13:54:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64321 00:16:05.510 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64321) - No such process 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:05.510 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:05.510 { 00:16:05.510 "params": { 00:16:05.510 "name": "Nvme$subsystem", 00:16:05.510 "trtype": "$TEST_TRANSPORT", 00:16:05.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.510 "adrfam": "ipv4", 00:16:05.510 "trsvcid": "$NVMF_PORT", 00:16:05.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.511 "hdgst": ${hdgst:-false}, 00:16:05.511 "ddgst": ${ddgst:-false} 00:16:05.511 }, 00:16:05.511 "method": "bdev_nvme_attach_controller" 00:16:05.511 } 00:16:05.511 EOF 00:16:05.511 )") 00:16:05.511 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:05.511 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:05.511 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:05.511 13:54:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:05.511 "params": { 00:16:05.511 "name": "Nvme0", 00:16:05.511 "trtype": "tcp", 00:16:05.511 "traddr": "10.0.0.2", 00:16:05.511 "adrfam": "ipv4", 00:16:05.511 "trsvcid": "4420", 00:16:05.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:05.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:05.511 "hdgst": false, 00:16:05.511 "ddgst": false 00:16:05.511 }, 00:16:05.511 "method": "bdev_nvme_attach_controller" 00:16:05.511 }' 00:16:05.511 [2024-05-15 13:54:03.996218] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:16:05.511 [2024-05-15 13:54:03.996783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64354 ] 00:16:05.770 [2024-05-15 13:54:04.137551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.770 [2024-05-15 13:54:04.237590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.028 Running I/O for 1 seconds... 
00:16:06.962 00:16:06.962 Latency(us) 00:16:06.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.962 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:06.962 Verification LBA range: start 0x0 length 0x400 00:16:06.962 Nvme0n1 : 1.03 1989.31 124.33 0.00 0.00 31589.89 3342.60 32215.29 00:16:06.962 =================================================================================================================== 00:16:06.962 Total : 1989.31 124.33 0.00 0.00 31589.89 3342.60 32215.29 00:16:07.220 13:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:07.220 13:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:07.220 13:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.221 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.221 rmmod nvme_tcp 00:16:07.480 rmmod nvme_fabrics 00:16:07.480 rmmod nvme_keyring 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64266 ']' 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64266 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 64266 ']' 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 64266 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64266 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:07.480 killing process with pid 64266 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64266' 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 64266 00:16:07.480 [2024-05-15 13:54:05.888844] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:16:07.480 13:54:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 64266 00:16:07.739 [2024-05-15 13:54:06.097729] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:07.739 00:16:07.739 real 0m6.008s 00:16:07.739 user 0m22.173s 00:16:07.739 sys 0m1.726s 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:07.739 ************************************ 00:16:07.739 END TEST nvmf_host_management 00:16:07.739 ************************************ 00:16:07.739 13:54:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.739 13:54:06 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:07.739 13:54:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:07.739 13:54:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:07.739 13:54:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.739 ************************************ 00:16:07.739 START TEST nvmf_lvol 00:16:07.739 ************************************ 00:16:07.739 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:07.999 * Looking for test storage... 
00:16:07.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:07.999 13:54:06 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:07.999 Cannot find device "nvmf_tgt_br" 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.999 Cannot find device "nvmf_tgt_br2" 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:07.999 Cannot find device "nvmf_tgt_br" 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:07.999 Cannot find device "nvmf_tgt_br2" 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:16:07.999 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.258 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:08.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:16:08.518 00:16:08.518 --- 10.0.0.2 ping statistics --- 00:16:08.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.518 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:08.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:08.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:16:08.518 00:16:08.518 --- 10.0.0.3 ping statistics --- 00:16:08.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.518 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:08.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:16:08.518 00:16:08.518 --- 10.0.0.1 ping statistics --- 00:16:08.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.518 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:08.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64572 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64572 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 64572 ']' 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:08.518 13:54:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:08.518 [2024-05-15 13:54:06.944501] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:16:08.518 [2024-05-15 13:54:06.944574] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.777 [2024-05-15 13:54:07.086878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:08.777 [2024-05-15 13:54:07.176192] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.777 [2024-05-15 13:54:07.176238] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:08.777 [2024-05-15 13:54:07.176248] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.777 [2024-05-15 13:54:07.176256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.777 [2024-05-15 13:54:07.176263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.777 [2024-05-15 13:54:07.176453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.777 [2024-05-15 13:54:07.176671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.777 [2024-05-15 13:54:07.176672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.345 13:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:09.345 13:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:09.345 13:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.345 13:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.345 13:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:09.345 13:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.345 13:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:09.604 [2024-05-15 13:54:08.025644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.604 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.864 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:09.864 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.122 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:10.122 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:10.122 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:10.382 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=efe5dc6c-01d1-4ed5-9397-45dda6645283 00:16:10.382 13:54:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u efe5dc6c-01d1-4ed5-9397-45dda6645283 lvol 20 00:16:10.640 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4b534796-cd5f-4271-9a86-6a868da5e301 00:16:10.640 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:10.897 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4b534796-cd5f-4271-9a86-6a868da5e301 00:16:10.897 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:11.155 [2024-05-15 13:54:09.630135] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:16:11.155 [2024-05-15 13:54:09.630432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.155 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:11.414 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64642 00:16:11.414 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:11.414 13:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:12.351 13:54:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4b534796-cd5f-4271-9a86-6a868da5e301 MY_SNAPSHOT 00:16:12.610 13:54:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5ee6d8d1-9c3c-4bb3-a966-45e82af46b43 00:16:12.610 13:54:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4b534796-cd5f-4271-9a86-6a868da5e301 30 00:16:12.868 13:54:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5ee6d8d1-9c3c-4bb3-a966-45e82af46b43 MY_CLONE 00:16:13.145 13:54:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e1152774-833e-4437-994f-3d8b77127f78 00:16:13.145 13:54:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e1152774-833e-4437-994f-3d8b77127f78 00:16:13.419 13:54:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64642 00:16:23.397 Initializing NVMe Controllers 00:16:23.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:23.397 Controller IO queue size 128, less than required. 00:16:23.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:23.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:23.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:23.397 Initialization complete. Launching workers. 
00:16:23.397 ======================================================== 00:16:23.397 Latency(us) 00:16:23.397 Device Information : IOPS MiB/s Average min max 00:16:23.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11196.90 43.74 11435.92 1659.07 50879.83 00:16:23.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11369.30 44.41 11262.21 4455.51 52593.46 00:16:23.397 ======================================================== 00:16:23.397 Total : 22566.20 88.15 11348.40 1659.07 52593.46 00:16:23.397 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4b534796-cd5f-4271-9a86-6a868da5e301 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u efe5dc6c-01d1-4ed5-9397-45dda6645283 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.397 rmmod nvme_tcp 00:16:23.397 rmmod nvme_fabrics 00:16:23.397 rmmod nvme_keyring 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64572 ']' 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64572 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 64572 ']' 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 64572 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64572 00:16:23.397 killing process with pid 64572 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64572' 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 64572 00:16:23.397 [2024-05-15 13:54:20.902851] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:23.397 13:54:20 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@970 -- # wait 64572 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:23.397 ************************************ 00:16:23.397 END TEST nvmf_lvol 00:16:23.397 ************************************ 00:16:23.397 00:16:23.397 real 0m14.949s 00:16:23.397 user 0m59.995s 00:16:23.397 sys 0m5.902s 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:23.397 13:54:21 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:23.397 13:54:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:23.397 13:54:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:23.397 13:54:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.397 ************************************ 00:16:23.397 START TEST nvmf_lvs_grow 00:16:23.397 ************************************ 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:23.397 * Looking for test storage... 
00:16:23.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.397 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.398 Cannot find device "nvmf_tgt_br" 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.398 Cannot find device "nvmf_tgt_br2" 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.398 Cannot find device "nvmf_tgt_br" 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:23.398 Cannot find device "nvmf_tgt_br2" 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.398 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:23.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:16:23.398 00:16:23.398 --- 10.0.0.2 ping statistics --- 00:16:23.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.398 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:23.398 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.399 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:23.399 00:16:23.399 --- 10.0.0.3 ping statistics --- 00:16:23.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.399 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:23.399 00:16:23.399 --- 10.0.0.1 ping statistics --- 00:16:23.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.399 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=64960 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 64960 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 64960 ']' 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:23.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:23.399 13:54:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.399 [2024-05-15 13:54:21.930071] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:16:23.399 [2024-05-15 13:54:21.930152] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.657 [2024-05-15 13:54:22.074331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.657 [2024-05-15 13:54:22.182417] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.657 [2024-05-15 13:54:22.182471] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.657 [2024-05-15 13:54:22.182482] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.657 [2024-05-15 13:54:22.182490] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.657 [2024-05-15 13:54:22.182498] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.657 [2024-05-15 13:54:22.182525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.226 13:54:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:24.226 13:54:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:24.226 13:54:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.226 13:54:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.226 13:54:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:24.486 13:54:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.486 13:54:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:24.486 [2024-05-15 13:54:22.984295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:24.486 ************************************ 00:16:24.486 START TEST lvs_grow_clean 00:16:24.486 ************************************ 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:24.486 13:54:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:24.486 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:24.827 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:24.827 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:25.087 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eb261f89-f483-4312-a8cb-875ca645303d 00:16:25.087 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:25.087 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:25.347 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:25.347 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:25.347 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u eb261f89-f483-4312-a8cb-875ca645303d lvol 150 00:16:25.347 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1cf2475e-179b-4698-86ea-355f730b3f59 00:16:25.347 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:25.347 13:54:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:25.615 [2024-05-15 13:54:24.043163] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:25.615 [2024-05-15 13:54:24.043232] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:25.615 true 00:16:25.615 13:54:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:25.615 13:54:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:25.875 13:54:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:25.875 13:54:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:26.133 13:54:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1cf2475e-179b-4698-86ea-355f730b3f59 00:16:26.133 13:54:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:26.393 [2024-05-15 13:54:24.846144] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:26.393 [2024-05-15 13:54:24.846392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.393 13:54:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65037 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65037 /var/tmp/bdevperf.sock 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 65037 ']' 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:26.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:26.652 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:26.652 [2024-05-15 13:54:25.103778] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:16:26.652 [2024-05-15 13:54:25.103856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65037 ] 00:16:26.910 [2024-05-15 13:54:25.243504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.910 [2024-05-15 13:54:25.349967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.478 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.478 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:27.478 13:54:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:27.737 Nvme0n1 00:16:27.996 13:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:27.996 [ 00:16:27.996 { 00:16:27.996 "name": "Nvme0n1", 00:16:27.996 "aliases": [ 00:16:27.996 "1cf2475e-179b-4698-86ea-355f730b3f59" 00:16:27.996 ], 00:16:27.996 "product_name": "NVMe disk", 00:16:27.996 "block_size": 4096, 00:16:27.996 "num_blocks": 38912, 00:16:27.996 "uuid": "1cf2475e-179b-4698-86ea-355f730b3f59", 00:16:27.996 "assigned_rate_limits": { 00:16:27.996 "rw_ios_per_sec": 0, 00:16:27.996 "rw_mbytes_per_sec": 0, 00:16:27.996 "r_mbytes_per_sec": 0, 00:16:27.996 "w_mbytes_per_sec": 0 00:16:27.996 }, 00:16:27.996 "claimed": false, 00:16:27.996 "zoned": false, 00:16:27.996 "supported_io_types": { 00:16:27.996 "read": true, 00:16:27.996 "write": true, 00:16:27.996 "unmap": true, 00:16:27.996 "write_zeroes": true, 00:16:27.996 "flush": true, 00:16:27.996 "reset": true, 00:16:27.996 "compare": true, 00:16:27.996 "compare_and_write": true, 00:16:27.996 "abort": true, 00:16:27.996 "nvme_admin": true, 00:16:27.996 "nvme_io": true 00:16:27.996 }, 00:16:27.996 "memory_domains": [ 00:16:27.996 { 00:16:27.996 "dma_device_id": "system", 00:16:27.996 "dma_device_type": 1 00:16:27.996 } 00:16:27.996 ], 00:16:27.996 "driver_specific": { 00:16:27.996 "nvme": [ 00:16:27.996 { 00:16:27.996 "trid": { 00:16:27.996 "trtype": "TCP", 00:16:27.996 "adrfam": "IPv4", 00:16:27.996 "traddr": "10.0.0.2", 00:16:27.996 "trsvcid": "4420", 00:16:27.996 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:27.996 }, 00:16:27.996 "ctrlr_data": { 00:16:27.996 "cntlid": 1, 00:16:27.996 "vendor_id": "0x8086", 00:16:27.996 "model_number": "SPDK bdev Controller", 00:16:27.996 "serial_number": "SPDK0", 00:16:27.996 "firmware_revision": "24.05", 00:16:27.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:27.996 "oacs": { 00:16:27.996 "security": 0, 00:16:27.996 "format": 0, 00:16:27.996 "firmware": 0, 00:16:27.996 "ns_manage": 0 00:16:27.996 }, 00:16:27.996 "multi_ctrlr": true, 00:16:27.996 "ana_reporting": false 00:16:27.996 }, 00:16:27.996 "vs": { 00:16:27.996 "nvme_version": "1.3" 00:16:27.996 }, 00:16:27.997 "ns_data": { 00:16:27.997 "id": 1, 00:16:27.997 "can_share": true 00:16:27.997 } 00:16:27.997 } 00:16:27.997 ], 00:16:27.997 "mp_policy": "active_passive" 00:16:27.997 } 00:16:27.997 } 00:16:27.997 ] 00:16:27.997 13:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65061 00:16:27.997 13:54:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:27.997 13:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:28.260 Running I/O for 10 seconds... 00:16:29.206 Latency(us) 00:16:29.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.206 Nvme0n1 : 1.00 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:16:29.206 =================================================================================================================== 00:16:29.206 Total : 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:16:29.206 00:16:30.142 13:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:30.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.142 Nvme0n1 : 2.00 9206.50 35.96 0.00 0.00 0.00 0.00 0.00 00:16:30.142 =================================================================================================================== 00:16:30.142 Total : 9206.50 35.96 0.00 0.00 0.00 0.00 0.00 00:16:30.142 00:16:30.400 true 00:16:30.400 13:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:30.400 13:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:30.659 13:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:30.659 13:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:30.659 13:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65061 00:16:31.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.264 Nvme0n1 : 3.00 9303.67 36.34 0.00 0.00 0.00 0.00 0.00 00:16:31.264 =================================================================================================================== 00:16:31.264 Total : 9303.67 36.34 0.00 0.00 0.00 0.00 0.00 00:16:31.264 00:16:32.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.246 Nvme0n1 : 4.00 9295.50 36.31 0.00 0.00 0.00 0.00 0.00 00:16:32.246 =================================================================================================================== 00:16:32.246 Total : 9295.50 36.31 0.00 0.00 0.00 0.00 0.00 00:16:32.246 00:16:33.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.181 Nvme0n1 : 5.00 9290.60 36.29 0.00 0.00 0.00 0.00 0.00 00:16:33.181 =================================================================================================================== 00:16:33.181 Total : 9290.60 36.29 0.00 0.00 0.00 0.00 0.00 00:16:33.181 00:16:34.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.117 Nvme0n1 : 6.00 9263.33 36.18 0.00 0.00 0.00 0.00 0.00 00:16:34.117 =================================================================================================================== 00:16:34.117 Total : 9263.33 36.18 0.00 0.00 0.00 0.00 0.00 00:16:34.117 00:16:35.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:16:35.052 Nvme0n1 : 7.00 9210.00 35.98 0.00 0.00 0.00 0.00 0.00 00:16:35.052 =================================================================================================================== 00:16:35.052 Total : 9210.00 35.98 0.00 0.00 0.00 0.00 0.00 00:16:35.052 00:16:36.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.429 Nvme0n1 : 8.00 9217.62 36.01 0.00 0.00 0.00 0.00 0.00 00:16:36.429 =================================================================================================================== 00:16:36.429 Total : 9217.62 36.01 0.00 0.00 0.00 0.00 0.00 00:16:36.429 00:16:37.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.365 Nvme0n1 : 9.00 9222.00 36.02 0.00 0.00 0.00 0.00 0.00 00:16:37.365 =================================================================================================================== 00:16:37.365 Total : 9222.00 36.02 0.00 0.00 0.00 0.00 0.00 00:16:37.365 00:16:38.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.304 Nvme0n1 : 10.00 9226.90 36.04 0.00 0.00 0.00 0.00 0.00 00:16:38.304 =================================================================================================================== 00:16:38.304 Total : 9226.90 36.04 0.00 0.00 0.00 0.00 0.00 00:16:38.304 00:16:38.304 00:16:38.304 Latency(us) 00:16:38.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.304 Nvme0n1 : 10.01 9232.56 36.06 0.00 0.00 13859.86 10001.48 34320.86 00:16:38.304 =================================================================================================================== 00:16:38.304 Total : 9232.56 36.06 0.00 0.00 13859.86 10001.48 34320.86 00:16:38.304 0 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65037 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 65037 ']' 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 65037 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65037 00:16:38.304 killing process with pid 65037 00:16:38.304 Received shutdown signal, test time was about 10.000000 seconds 00:16:38.304 00:16:38.304 Latency(us) 00:16:38.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.304 =================================================================================================================== 00:16:38.304 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65037' 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 65037 00:16:38.304 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@970 -- # wait 65037 00:16:38.562 13:54:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:38.562 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:38.820 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:38.820 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:39.079 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:39.079 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:39.079 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:39.337 [2024-05-15 13:54:37.652530] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:39.337 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:39.338 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:39.338 request: 00:16:39.338 { 00:16:39.338 "uuid": "eb261f89-f483-4312-a8cb-875ca645303d", 00:16:39.338 "method": "bdev_lvol_get_lvstores", 00:16:39.338 "req_id": 1 00:16:39.338 } 00:16:39.338 Got JSON-RPC error response 00:16:39.338 response: 00:16:39.338 { 00:16:39.338 "code": -19, 00:16:39.338 "message": "No such device" 00:16:39.338 } 00:16:39.597 13:54:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:39.597 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:39.597 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:39.597 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:39.597 13:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:39.597 aio_bdev 00:16:39.597 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1cf2475e-179b-4698-86ea-355f730b3f59 00:16:39.597 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=1cf2475e-179b-4698-86ea-355f730b3f59 00:16:39.597 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:39.597 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:16:39.597 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:39.597 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:39.597 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:39.856 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1cf2475e-179b-4698-86ea-355f730b3f59 -t 2000 00:16:40.115 [ 00:16:40.115 { 00:16:40.115 "name": "1cf2475e-179b-4698-86ea-355f730b3f59", 00:16:40.115 "aliases": [ 00:16:40.115 "lvs/lvol" 00:16:40.115 ], 00:16:40.115 "product_name": "Logical Volume", 00:16:40.115 "block_size": 4096, 00:16:40.115 "num_blocks": 38912, 00:16:40.115 "uuid": "1cf2475e-179b-4698-86ea-355f730b3f59", 00:16:40.115 "assigned_rate_limits": { 00:16:40.115 "rw_ios_per_sec": 0, 00:16:40.115 "rw_mbytes_per_sec": 0, 00:16:40.115 "r_mbytes_per_sec": 0, 00:16:40.115 "w_mbytes_per_sec": 0 00:16:40.115 }, 00:16:40.115 "claimed": false, 00:16:40.115 "zoned": false, 00:16:40.115 "supported_io_types": { 00:16:40.115 "read": true, 00:16:40.115 "write": true, 00:16:40.115 "unmap": true, 00:16:40.115 "write_zeroes": true, 00:16:40.115 "flush": false, 00:16:40.115 "reset": true, 00:16:40.115 "compare": false, 00:16:40.115 "compare_and_write": false, 00:16:40.115 "abort": false, 00:16:40.115 "nvme_admin": false, 00:16:40.115 "nvme_io": false 00:16:40.115 }, 00:16:40.115 "driver_specific": { 00:16:40.115 "lvol": { 00:16:40.115 "lvol_store_uuid": "eb261f89-f483-4312-a8cb-875ca645303d", 00:16:40.115 "base_bdev": "aio_bdev", 00:16:40.115 "thin_provision": false, 00:16:40.115 "num_allocated_clusters": 38, 00:16:40.115 "snapshot": false, 00:16:40.115 "clone": false, 00:16:40.115 "esnap_clone": false 00:16:40.115 } 00:16:40.115 } 00:16:40.115 } 00:16:40.115 ] 00:16:40.115 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:16:40.115 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:40.115 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
-u eb261f89-f483-4312-a8cb-875ca645303d 00:16:40.374 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:40.374 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:40.374 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:40.374 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:40.374 13:54:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1cf2475e-179b-4698-86ea-355f730b3f59 00:16:40.633 13:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb261f89-f483-4312-a8cb-875ca645303d 00:16:40.892 13:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:41.169 13:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:41.463 ************************************ 00:16:41.463 END TEST lvs_grow_clean 00:16:41.463 ************************************ 00:16:41.463 00:16:41.463 real 0m16.967s 00:16:41.463 user 0m15.114s 00:16:41.463 sys 0m3.077s 00:16:41.463 13:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:41.463 13:54:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:41.722 ************************************ 00:16:41.722 START TEST lvs_grow_dirty 00:16:41.722 ************************************ 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:41.722 13:54:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:41.722 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:41.981 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:41.981 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:41.981 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:41.981 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:42.240 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:42.240 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:42.240 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 lvol 150 00:16:42.499 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=24749265-5509-4b1d-97cd-0fc8b30409b7 00:16:42.499 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:42.499 13:54:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:42.758 [2024-05-15 13:54:41.173883] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:42.758 [2024-05-15 13:54:41.173956] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:42.758 true 00:16:42.758 13:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:42.758 13:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:43.017 13:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:43.017 13:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:43.301 13:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24749265-5509-4b1d-97cd-0fc8b30409b7 00:16:43.301 13:54:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:43.559 [2024-05-15 13:54:41.989063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.559 13:54:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:43.817 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65299 00:16:43.817 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65299 /var/tmp/bdevperf.sock 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 65299 ']' 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:43.818 13:54:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:43.818 [2024-05-15 13:54:42.255150] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:16:43.818 [2024-05-15 13:54:42.255457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65299 ] 00:16:44.076 [2024-05-15 13:54:42.395004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.076 [2024-05-15 13:54:42.506060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.641 13:54:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:44.641 13:54:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:44.641 13:54:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:44.899 Nvme0n1 00:16:44.899 13:54:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:45.156 [ 00:16:45.156 { 00:16:45.156 "name": "Nvme0n1", 00:16:45.156 "aliases": [ 00:16:45.156 "24749265-5509-4b1d-97cd-0fc8b30409b7" 00:16:45.156 ], 00:16:45.156 "product_name": "NVMe disk", 00:16:45.156 "block_size": 4096, 00:16:45.156 "num_blocks": 38912, 00:16:45.156 "uuid": "24749265-5509-4b1d-97cd-0fc8b30409b7", 00:16:45.156 "assigned_rate_limits": { 00:16:45.156 "rw_ios_per_sec": 0, 00:16:45.156 "rw_mbytes_per_sec": 0, 00:16:45.156 "r_mbytes_per_sec": 0, 00:16:45.156 "w_mbytes_per_sec": 0 00:16:45.156 }, 00:16:45.156 "claimed": false, 00:16:45.156 "zoned": false, 00:16:45.156 "supported_io_types": { 00:16:45.156 "read": true, 00:16:45.156 "write": true, 00:16:45.156 "unmap": true, 00:16:45.156 "write_zeroes": true, 00:16:45.156 "flush": true, 00:16:45.156 "reset": true, 00:16:45.156 "compare": true, 00:16:45.156 "compare_and_write": true, 00:16:45.156 "abort": true, 00:16:45.156 "nvme_admin": true, 00:16:45.156 "nvme_io": true 00:16:45.156 }, 00:16:45.156 "memory_domains": [ 00:16:45.156 { 00:16:45.156 "dma_device_id": "system", 00:16:45.156 "dma_device_type": 1 00:16:45.156 } 00:16:45.156 ], 00:16:45.156 "driver_specific": { 00:16:45.156 "nvme": [ 00:16:45.156 { 00:16:45.156 "trid": { 00:16:45.156 "trtype": "TCP", 00:16:45.156 "adrfam": "IPv4", 00:16:45.156 "traddr": "10.0.0.2", 00:16:45.156 "trsvcid": "4420", 00:16:45.156 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:45.156 }, 00:16:45.156 "ctrlr_data": { 00:16:45.156 "cntlid": 1, 00:16:45.156 "vendor_id": "0x8086", 00:16:45.156 "model_number": "SPDK bdev Controller", 00:16:45.156 "serial_number": "SPDK0", 00:16:45.156 "firmware_revision": "24.05", 00:16:45.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:45.156 "oacs": { 00:16:45.156 "security": 0, 00:16:45.156 "format": 0, 00:16:45.156 "firmware": 0, 00:16:45.156 "ns_manage": 0 00:16:45.156 }, 00:16:45.156 "multi_ctrlr": true, 00:16:45.156 "ana_reporting": false 00:16:45.156 }, 00:16:45.156 "vs": { 00:16:45.156 "nvme_version": "1.3" 00:16:45.156 }, 00:16:45.156 "ns_data": { 00:16:45.156 "id": 1, 00:16:45.156 "can_share": true 00:16:45.156 } 00:16:45.156 } 00:16:45.156 ], 00:16:45.156 "mp_policy": "active_passive" 00:16:45.156 } 00:16:45.156 } 00:16:45.156 ] 00:16:45.156 13:54:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:45.156 13:54:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65317 00:16:45.156 13:54:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:45.156 Running I/O for 10 seconds... 00:16:46.528 Latency(us) 00:16:46.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.528 Nvme0n1 : 1.00 9525.00 37.21 0.00 0.00 0.00 0.00 0.00 00:16:46.528 =================================================================================================================== 00:16:46.528 Total : 9525.00 37.21 0.00 0.00 0.00 0.00 0.00 00:16:46.528 00:16:47.093 13:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:47.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.351 Nvme0n1 : 2.00 9641.50 37.66 0.00 0.00 0.00 0.00 0.00 00:16:47.351 =================================================================================================================== 00:16:47.351 Total : 9641.50 37.66 0.00 0.00 0.00 0.00 0.00 00:16:47.351 00:16:47.351 true 00:16:47.351 13:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:47.351 13:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:47.610 13:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:47.610 13:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:47.610 13:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65317 00:16:48.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.178 Nvme0n1 : 3.00 9687.33 37.84 0.00 0.00 0.00 0.00 0.00 00:16:48.178 =================================================================================================================== 00:16:48.178 Total : 9687.33 37.84 0.00 0.00 0.00 0.00 0.00 00:16:48.178 00:16:49.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.114 Nvme0n1 : 4.00 9676.75 37.80 0.00 0.00 0.00 0.00 0.00 00:16:49.114 =================================================================================================================== 00:16:49.114 Total : 9676.75 37.80 0.00 0.00 0.00 0.00 0.00 00:16:49.114 00:16:50.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.493 Nvme0n1 : 5.00 9646.40 37.68 0.00 0.00 0.00 0.00 0.00 00:16:50.493 =================================================================================================================== 00:16:50.493 Total : 9646.40 37.68 0.00 0.00 0.00 0.00 0.00 00:16:50.493 00:16:51.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.430 Nvme0n1 : 6.00 9613.00 37.55 0.00 0.00 0.00 0.00 0.00 00:16:51.430 =================================================================================================================== 00:16:51.430 Total : 9613.00 37.55 0.00 0.00 0.00 0.00 0.00 00:16:51.430 00:16:52.367 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.367 Nvme0n1 : 7.00 9564.14 37.36 0.00 0.00 0.00 0.00 0.00 00:16:52.367 =================================================================================================================== 00:16:52.367 Total : 9564.14 37.36 0.00 0.00 0.00 0.00 0.00 00:16:52.367 00:16:53.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.304 Nvme0n1 : 8.00 9527.50 37.22 0.00 0.00 0.00 0.00 0.00 00:16:53.304 =================================================================================================================== 00:16:53.304 Total : 9527.50 37.22 0.00 0.00 0.00 0.00 0.00 00:16:53.304 00:16:54.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.242 Nvme0n1 : 9.00 9058.11 35.38 0.00 0.00 0.00 0.00 0.00 00:16:54.242 =================================================================================================================== 00:16:54.242 Total : 9058.11 35.38 0.00 0.00 0.00 0.00 0.00 00:16:54.242 00:16:55.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.178 Nvme0n1 : 10.00 9028.60 35.27 0.00 0.00 0.00 0.00 0.00 00:16:55.178 =================================================================================================================== 00:16:55.178 Total : 9028.60 35.27 0.00 0.00 0.00 0.00 0.00 00:16:55.178 00:16:55.178 00:16:55.178 Latency(us) 00:16:55.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.178 Nvme0n1 : 10.01 9036.42 35.30 0.00 0.00 14160.96 7264.23 404270.27 00:16:55.178 =================================================================================================================== 00:16:55.178 Total : 9036.42 35.30 0.00 0.00 14160.96 7264.23 404270.27 00:16:55.178 0 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65299 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 65299 ']' 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 65299 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65299 00:16:55.178 killing process with pid 65299 00:16:55.178 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.178 00:16:55.178 Latency(us) 00:16:55.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.178 =================================================================================================================== 00:16:55.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65299' 00:16:55.178 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 65299 00:16:55.178 13:54:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 65299 00:16:55.436 13:54:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:55.693 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:55.952 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:55.952 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64960 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64960 00:16:56.210 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64960 Killed "${NVMF_APP[@]}" "$@" 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:56.210 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65450 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65450 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 65450 ']' 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.211 13:54:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:56.211 [2024-05-15 13:54:54.695321] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:16:56.211 [2024-05-15 13:54:54.695406] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.468 [2024-05-15 13:54:54.840117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.468 [2024-05-15 13:54:54.944325] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.468 [2024-05-15 13:54:54.944380] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.468 [2024-05-15 13:54:54.944389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.468 [2024-05-15 13:54:54.944398] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.468 [2024-05-15 13:54:54.944404] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.468 [2024-05-15 13:54:54.944430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.034 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:57.035 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:57.035 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.035 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.035 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:57.035 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.035 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:57.292 [2024-05-15 13:54:55.818844] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:57.292 [2024-05-15 13:54:55.819351] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:57.292 [2024-05-15 13:54:55.819970] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:57.549 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:57.549 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 24749265-5509-4b1d-97cd-0fc8b30409b7 00:16:57.549 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=24749265-5509-4b1d-97cd-0fc8b30409b7 00:16:57.549 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:57.550 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:16:57.550 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:57.550 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:57.550 13:54:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:57.808 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24749265-5509-4b1d-97cd-0fc8b30409b7 -t 2000 00:16:57.808 [ 00:16:57.808 { 00:16:57.808 "name": "24749265-5509-4b1d-97cd-0fc8b30409b7", 00:16:57.808 "aliases": [ 00:16:57.808 "lvs/lvol" 00:16:57.808 ], 00:16:57.808 "product_name": "Logical Volume", 00:16:57.808 "block_size": 4096, 00:16:57.808 "num_blocks": 38912, 00:16:57.808 "uuid": "24749265-5509-4b1d-97cd-0fc8b30409b7", 00:16:57.808 "assigned_rate_limits": { 00:16:57.808 "rw_ios_per_sec": 0, 00:16:57.808 "rw_mbytes_per_sec": 0, 00:16:57.808 "r_mbytes_per_sec": 0, 00:16:57.808 "w_mbytes_per_sec": 0 00:16:57.808 }, 00:16:57.808 "claimed": false, 00:16:57.808 "zoned": false, 00:16:57.808 "supported_io_types": { 00:16:57.808 "read": true, 00:16:57.808 "write": true, 00:16:57.808 "unmap": true, 00:16:57.808 "write_zeroes": true, 00:16:57.808 "flush": false, 00:16:57.808 "reset": true, 00:16:57.808 "compare": false, 00:16:57.808 "compare_and_write": false, 00:16:57.808 "abort": false, 00:16:57.808 "nvme_admin": false, 00:16:57.808 "nvme_io": false 00:16:57.808 }, 00:16:57.808 "driver_specific": { 00:16:57.808 "lvol": { 00:16:57.808 "lvol_store_uuid": "3252dc56-1198-4b4a-9c1a-fc5bdd118e83", 00:16:57.808 "base_bdev": "aio_bdev", 00:16:57.808 "thin_provision": false, 00:16:57.808 "num_allocated_clusters": 38, 00:16:57.808 "snapshot": false, 00:16:57.808 "clone": false, 00:16:57.808 "esnap_clone": false 00:16:57.808 } 00:16:57.808 } 00:16:57.808 } 00:16:57.808 ] 00:16:57.808 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:16:57.808 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:57.808 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:58.067 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:58.067 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:58.067 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:58.325 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:58.325 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:58.610 [2024-05-15 13:54:56.953187] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:58.610 13:54:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:58.868 request: 00:16:58.868 { 00:16:58.868 "uuid": "3252dc56-1198-4b4a-9c1a-fc5bdd118e83", 00:16:58.868 "method": "bdev_lvol_get_lvstores", 00:16:58.868 "req_id": 1 00:16:58.868 } 00:16:58.868 Got JSON-RPC error response 00:16:58.868 response: 00:16:58.868 { 00:16:58.868 "code": -19, 00:16:58.868 "message": "No such device" 00:16:58.868 } 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:58.868 aio_bdev 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 24749265-5509-4b1d-97cd-0fc8b30409b7 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=24749265-5509-4b1d-97cd-0fc8b30409b7 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:58.868 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:59.126 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24749265-5509-4b1d-97cd-0fc8b30409b7 -t 2000 00:16:59.384 [ 00:16:59.384 { 00:16:59.384 "name": "24749265-5509-4b1d-97cd-0fc8b30409b7", 00:16:59.384 "aliases": [ 00:16:59.384 "lvs/lvol" 00:16:59.384 ], 00:16:59.384 "product_name": "Logical Volume", 00:16:59.384 "block_size": 4096, 00:16:59.384 "num_blocks": 38912, 00:16:59.384 
"uuid": "24749265-5509-4b1d-97cd-0fc8b30409b7", 00:16:59.384 "assigned_rate_limits": { 00:16:59.384 "rw_ios_per_sec": 0, 00:16:59.384 "rw_mbytes_per_sec": 0, 00:16:59.384 "r_mbytes_per_sec": 0, 00:16:59.384 "w_mbytes_per_sec": 0 00:16:59.384 }, 00:16:59.384 "claimed": false, 00:16:59.384 "zoned": false, 00:16:59.384 "supported_io_types": { 00:16:59.384 "read": true, 00:16:59.384 "write": true, 00:16:59.384 "unmap": true, 00:16:59.384 "write_zeroes": true, 00:16:59.384 "flush": false, 00:16:59.384 "reset": true, 00:16:59.384 "compare": false, 00:16:59.384 "compare_and_write": false, 00:16:59.384 "abort": false, 00:16:59.384 "nvme_admin": false, 00:16:59.384 "nvme_io": false 00:16:59.384 }, 00:16:59.384 "driver_specific": { 00:16:59.384 "lvol": { 00:16:59.384 "lvol_store_uuid": "3252dc56-1198-4b4a-9c1a-fc5bdd118e83", 00:16:59.384 "base_bdev": "aio_bdev", 00:16:59.384 "thin_provision": false, 00:16:59.384 "num_allocated_clusters": 38, 00:16:59.384 "snapshot": false, 00:16:59.384 "clone": false, 00:16:59.384 "esnap_clone": false 00:16:59.384 } 00:16:59.384 } 00:16:59.384 } 00:16:59.384 ] 00:16:59.384 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:16:59.384 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:59.384 13:54:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:59.643 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:59.643 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:16:59.643 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:59.905 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:59.905 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 24749265-5509-4b1d-97cd-0fc8b30409b7 00:16:59.905 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3252dc56-1198-4b4a-9c1a-fc5bdd118e83 00:17:00.163 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:00.421 13:54:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:00.987 ************************************ 00:17:00.987 END TEST lvs_grow_dirty 00:17:00.987 ************************************ 00:17:00.987 00:17:00.987 real 0m19.213s 00:17:00.987 user 0m38.308s 00:17:00.987 sys 0m8.119s 00:17:00.987 13:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:00.987 13:54:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:00.987 13:54:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:00.987 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:00.987 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 
00:17:00.987 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:00.988 nvmf_trace.0 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.988 rmmod nvme_tcp 00:17:00.988 rmmod nvme_fabrics 00:17:00.988 rmmod nvme_keyring 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65450 ']' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65450 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 65450 ']' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 65450 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65450 00:17:00.988 killing process with pid 65450 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65450' 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 65450 00:17:00.988 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 65450 00:17:01.245 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.246 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.246 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.246 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.246 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.246 13:54:59 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.246 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.246 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.246 13:54:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:01.246 00:17:01.246 real 0m38.508s 00:17:01.246 user 0m58.965s 00:17:01.246 sys 0m12.001s 00:17:01.504 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:01.504 13:54:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:01.504 ************************************ 00:17:01.504 END TEST nvmf_lvs_grow 00:17:01.504 ************************************ 00:17:01.504 13:54:59 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:01.504 13:54:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:01.504 13:54:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:01.504 13:54:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.504 ************************************ 00:17:01.504 START TEST nvmf_bdev_io_wait 00:17:01.504 ************************************ 00:17:01.504 13:54:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:01.504 * Looking for test storage... 00:17:01.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:01.504 13:54:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.504 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.505 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:17:01.764 Cannot find device "nvmf_tgt_br" 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.764 Cannot find device "nvmf_tgt_br2" 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:01.764 Cannot find device "nvmf_tgt_br" 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:01.764 Cannot find device "nvmf_tgt_br2" 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.764 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:02.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:17:02.022 00:17:02.022 --- 10.0.0.2 ping statistics --- 00:17:02.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.022 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:02.022 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.022 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:17:02.022 00:17:02.022 --- 10.0.0.3 ping statistics --- 00:17:02.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.022 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:02.022 00:17:02.022 --- 10.0.0.1 ping statistics --- 00:17:02.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.022 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65764 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65764 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 65764 ']' 00:17:02.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.022 13:55:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.022 [2024-05-15 13:55:00.509245] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:02.022 [2024-05-15 13:55:00.509327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.280 [2024-05-15 13:55:00.653289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.280 [2024-05-15 13:55:00.758239] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.280 [2024-05-15 13:55:00.758294] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
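For context, the nvmftestinit/nvmf_veth_init sequence traced above builds a small virtual topology before the target starts: a network namespace nvmf_tgt_ns_spdk holding two target-side veth endpoints (10.0.0.2 and 10.0.0.3), an initiator-side veth at 10.0.0.1, and a bridge nvmf_br joining the host-side peers, with iptables rules admitting NVMe/TCP traffic on port 4420; nvmf_tgt is then launched inside the namespace with --wait-for-rpc. A condensed sketch of the same setup, assuming the interface names and addresses shown in the trace, is:

    # Sketch of the veth/bridge topology used by these tests (names and addresses from the trace).
    ip netns add nvmf_tgt_ns_spdk

    # One initiator-side pair and two target-side pairs; the target ends move into the namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 on the initiator, 10.0.0.2/10.0.0.3 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers and open TCP port 4420 for NVMe/TCP.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity check, then start the target inside the namespace, paused at --wait-for-rpc.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &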
00:17:02.280 [2024-05-15 13:55:00.758304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.280 [2024-05-15 13:55:00.758312] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.280 [2024-05-15 13:55:00.758319] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.280 [2024-05-15 13:55:00.758410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.280 [2024-05-15 13:55:00.758783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.280 [2024-05-15 13:55:00.759120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.280 [2024-05-15 13:55:00.759124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.846 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.846 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:02.846 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.846 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.846 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 [2024-05-15 13:55:01.488506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 Malloc0 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.105 
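With nvmf_tgt listening on /var/tmp/spdk.sock, the test provisions the target entirely over JSON-RPC, as traced here and continued just below: bdev options are shrunk before framework_start_init (the target was started with --wait-for-rpc precisely so this can happen), a TCP transport is created, a 64 MiB malloc bdev is exported through subsystem nqn.2016-06.io.spdk:cnode1, and a listener is added on 10.0.0.2:4420. A minimal sketch of that sequence (rpc_cmd in the trace resolves to the same rpc.py script) would be:

    # Sketch of the JSON-RPC provisioning performed by bdev_io_wait.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Deliberately small bdev_io pool/cache (-p 5 -c 1), presumably to force IOs onto the io_wait path.
    "$rpc" bdev_set_options -p 5 -c 1
    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420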
13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 [2024-05-15 13:55:01.563108] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:03.105 [2024-05-15 13:55:01.563388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65799 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65801 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:03.105 { 00:17:03.105 "params": { 00:17:03.105 "name": "Nvme$subsystem", 00:17:03.105 "trtype": "$TEST_TRANSPORT", 00:17:03.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.105 "adrfam": "ipv4", 00:17:03.105 "trsvcid": "$NVMF_PORT", 00:17:03.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.105 "hdgst": ${hdgst:-false}, 00:17:03.105 "ddgst": ${ddgst:-false} 00:17:03.105 }, 00:17:03.105 "method": "bdev_nvme_attach_controller" 00:17:03.105 } 00:17:03.105 EOF 00:17:03.105 )") 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65803 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:03.105 { 00:17:03.105 "params": { 00:17:03.105 "name": "Nvme$subsystem", 00:17:03.105 "trtype": "$TEST_TRANSPORT", 00:17:03.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.105 "adrfam": "ipv4", 00:17:03.105 "trsvcid": "$NVMF_PORT", 00:17:03.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.105 "hdgst": ${hdgst:-false}, 00:17:03.105 "ddgst": ${ddgst:-false} 00:17:03.105 }, 00:17:03.105 "method": "bdev_nvme_attach_controller" 00:17:03.105 } 00:17:03.105 EOF 00:17:03.105 )") 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65806 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:03.105 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:03.105 { 00:17:03.105 "params": { 00:17:03.105 "name": "Nvme$subsystem", 00:17:03.105 "trtype": "$TEST_TRANSPORT", 00:17:03.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.105 "adrfam": "ipv4", 00:17:03.105 "trsvcid": "$NVMF_PORT", 00:17:03.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.105 "hdgst": ${hdgst:-false}, 00:17:03.105 "ddgst": ${ddgst:-false} 00:17:03.105 }, 00:17:03.105 "method": "bdev_nvme_attach_controller" 00:17:03.105 } 00:17:03.105 EOF 00:17:03.105 )") 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:03.106 { 00:17:03.106 "params": { 00:17:03.106 "name": "Nvme$subsystem", 00:17:03.106 "trtype": "$TEST_TRANSPORT", 00:17:03.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.106 "adrfam": "ipv4", 00:17:03.106 "trsvcid": "$NVMF_PORT", 00:17:03.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.106 "hdgst": ${hdgst:-false}, 00:17:03.106 "ddgst": ${ddgst:-false} 00:17:03.106 }, 00:17:03.106 "method": "bdev_nvme_attach_controller" 00:17:03.106 } 00:17:03.106 EOF 00:17:03.106 
)") 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:03.106 "params": { 00:17:03.106 "name": "Nvme1", 00:17:03.106 "trtype": "tcp", 00:17:03.106 "traddr": "10.0.0.2", 00:17:03.106 "adrfam": "ipv4", 00:17:03.106 "trsvcid": "4420", 00:17:03.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.106 "hdgst": false, 00:17:03.106 "ddgst": false 00:17:03.106 }, 00:17:03.106 "method": "bdev_nvme_attach_controller" 00:17:03.106 }' 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:03.106 "params": { 00:17:03.106 "name": "Nvme1", 00:17:03.106 "trtype": "tcp", 00:17:03.106 "traddr": "10.0.0.2", 00:17:03.106 "adrfam": "ipv4", 00:17:03.106 "trsvcid": "4420", 00:17:03.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.106 "hdgst": false, 00:17:03.106 "ddgst": false 00:17:03.106 }, 00:17:03.106 "method": "bdev_nvme_attach_controller" 00:17:03.106 }' 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:03.106 "params": { 00:17:03.106 "name": "Nvme1", 00:17:03.106 "trtype": "tcp", 00:17:03.106 "traddr": "10.0.0.2", 00:17:03.106 "adrfam": "ipv4", 00:17:03.106 "trsvcid": "4420", 00:17:03.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.106 "hdgst": false, 00:17:03.106 "ddgst": false 00:17:03.106 }, 00:17:03.106 "method": "bdev_nvme_attach_controller" 00:17:03.106 }' 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:03.106 "params": { 00:17:03.106 "name": "Nvme1", 00:17:03.106 "trtype": "tcp", 00:17:03.106 "traddr": "10.0.0.2", 00:17:03.106 "adrfam": "ipv4", 00:17:03.106 "trsvcid": "4420", 00:17:03.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.106 "hdgst": false, 00:17:03.106 "ddgst": false 00:17:03.106 }, 00:17:03.106 "method": "bdev_nvme_attach_controller" 00:17:03.106 }' 00:17:03.106 [2024-05-15 13:55:01.618022] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:03.106 [2024-05-15 13:55:01.618084] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:03.106 [2024-05-15 13:55:01.625775] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:17:03.106 [2024-05-15 13:55:01.625831] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:03.106 13:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65799 00:17:03.106 [2024-05-15 13:55:01.640211] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:03.106 [2024-05-15 13:55:01.640282] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:03.106 [2024-05-15 13:55:01.642141] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:03.106 [2024-05-15 13:55:01.642345] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:03.364 [2024-05-15 13:55:01.812247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.364 [2024-05-15 13:55:01.870706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.364 [2024-05-15 13:55:01.897380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.622 [2024-05-15 13:55:01.929497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.622 [2024-05-15 13:55:01.955911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:03.622 [2024-05-15 13:55:01.998687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.622 [2024-05-15 13:55:02.013379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:03.622 Running I/O for 1 seconds... 00:17:03.622 Running I/O for 1 seconds... 00:17:03.622 [2024-05-15 13:55:02.084327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:03.622 Running I/O for 1 seconds... 00:17:03.883 Running I/O for 1 seconds... 
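Each of the four bdevperf instances above (write, read, flush and unmap, on core masks 0x10, 0x20, 0x40 and 0x80) is fed its bdev configuration on /dev/fd/63: gen_nvmf_target_json expands the heredoc fragments seen in the trace into a JSON config whose only entry attaches an NVMe-oF controller to the target. A sketch of what a single run boils down to, with the attach-controller parameters copied from the printf output above and wrapped in the standard subsystems/config layout (the exact wrapper emitted by gen_nvmf_target_json is not shown in the trace), is:

    # Sketch of one of the four bdevperf invocations; the others differ only in -m, -i and -w.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    config='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        } ]
      } ]
    }'

    # 128 outstanding 4 KiB writes for 1 second, 256 MiB of memory, shared-memory id 1.
    "$bdevperf" -m 0x10 -i 1 --json <(printf '%s\n' "$config") -q 128 -o 4096 -w write -t 1 -s 256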
00:17:04.817 00:17:04.818 Latency(us) 00:17:04.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.818 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:04.818 Nvme1n1 : 1.02 7273.04 28.41 0.00 0.00 17378.62 7895.90 33057.52 00:17:04.818 =================================================================================================================== 00:17:04.818 Total : 7273.04 28.41 0.00 0.00 17378.62 7895.90 33057.52 00:17:04.818 00:17:04.818 Latency(us) 00:17:04.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.818 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:04.818 Nvme1n1 : 1.01 10130.55 39.57 0.00 0.00 12576.29 8632.85 24003.55 00:17:04.818 =================================================================================================================== 00:17:04.818 Total : 10130.55 39.57 0.00 0.00 12576.29 8632.85 24003.55 00:17:04.818 00:17:04.818 Latency(us) 00:17:04.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.818 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:04.818 Nvme1n1 : 1.00 215933.29 843.49 0.00 0.00 590.75 274.71 980.41 00:17:04.818 =================================================================================================================== 00:17:04.818 Total : 215933.29 843.49 0.00 0.00 590.75 274.71 980.41 00:17:04.818 00:17:04.818 Latency(us) 00:17:04.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.818 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:04.818 Nvme1n1 : 1.00 8190.56 31.99 0.00 0.00 15588.78 4211.15 42322.04 00:17:04.818 =================================================================================================================== 00:17:04.818 Total : 8190.56 31.99 0.00 0.00 15588.78 4211.15 42322.04 00:17:04.818 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65801 00:17:04.818 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65803 00:17:04.818 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65806 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.076 rmmod nvme_tcp 00:17:05.076 rmmod nvme_fabrics 00:17:05.076 rmmod nvme_keyring 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65764 ']' 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65764 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 65764 ']' 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 65764 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65764 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:05.076 killing process with pid 65764 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65764' 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 65764 00:17:05.076 [2024-05-15 13:55:03.621714] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:05.076 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 65764 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.334 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:05.592 00:17:05.592 real 0m4.019s 00:17:05.592 user 0m16.892s 00:17:05.592 sys 0m2.273s 00:17:05.592 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:05.592 13:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.592 ************************************ 00:17:05.592 END TEST nvmf_bdev_io_wait 00:17:05.592 ************************************ 00:17:05.592 13:55:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:05.592 13:55:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:05.592 13:55:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:05.592 13:55:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:05.592 
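Teardown at the end of the bdev_io_wait case, traced just above, mirrors the setup: the subsystem is deleted over RPC, nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the nvmf_tgt process recorded at start (pid 65764 here), removes the namespace and flushes the initiator address. A compressed sketch, with the namespace removal assumed to be an ip netns delete (the body of _remove_spdk_ns is not shown in the trace), would be:

    # Sketch of the cleanup path; pid and interface names taken from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nvmfpid=65764                      # pid of nvmf_tgt captured when it was started

    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    sync
    modprobe -v -r nvme-tcp            # the -v output is the rmmod nvme_tcp/nvme_fabrics/nvme_keyring seen above
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid"                    # killprocess in the trace; the helper then waits for the process to exit

    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if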
************************************ 00:17:05.592 START TEST nvmf_queue_depth 00:17:05.592 ************************************ 00:17:05.592 13:55:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:05.592 * Looking for test storage... 00:17:05.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.592 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:05.850 Cannot find device "nvmf_tgt_br" 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.850 Cannot find device "nvmf_tgt_br2" 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:05.850 Cannot find device "nvmf_tgt_br" 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:05.850 Cannot find device "nvmf_tgt_br2" 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:05.850 13:55:04 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.850 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:17:06.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:06.108 00:17:06.108 --- 10.0.0.2 ping statistics --- 00:17:06.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.108 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:06.108 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.108 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:06.108 00:17:06.108 --- 10.0.0.3 ping statistics --- 00:17:06.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.108 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:06.108 00:17:06.108 --- 10.0.0.1 ping statistics --- 00:17:06.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.108 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66038 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66038 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 66038 ']' 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
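A condensed sketch of the topology the nvmf_veth_init steps above build (interface names, addresses and the 4420 port are taken from the trace; link-up steps, deletions and error handling are omitted, so this is a summary rather than a verbatim excerpt of nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator path
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target path  -> 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path -> 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address on the host
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # joins the three host-side veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port

The three ping commands above are the sanity check that both target addresses are reachable from the host and that the initiator address is reachable from inside the namespace before nvmf_tgt is started.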
00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.108 13:55:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 [2024-05-15 13:55:04.644364] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:06.108 [2024-05-15 13:55:04.644439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.366 [2024-05-15 13:55:04.786865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.366 [2024-05-15 13:55:04.884593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.366 [2024-05-15 13:55:04.884643] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.366 [2024-05-15 13:55:04.884653] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.366 [2024-05-15 13:55:04.884661] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.366 [2024-05-15 13:55:04.884668] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.366 [2024-05-15 13:55:04.884698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.933 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.933 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:06.933 13:55:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.933 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.933 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:07.193 [2024-05-15 13:55:05.541303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:07.193 Malloc0 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.193 13:55:05 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.193 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:07.193 [2024-05-15 13:55:05.599755] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:07.194 [2024-05-15 13:55:05.599991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66070 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66070 /var/tmp/bdevperf.sock 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 66070 ']' 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:07.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:07.194 13:55:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:07.194 [2024-05-15 13:55:05.651974] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
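Stripped of the xtrace prefixes, the rpc_cmd calls above assemble the queue-depth target in five steps. A minimal sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock of the nvmf_tgt started earlier (arguments copied from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as used by the test
    $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: any host, -s: serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched with its own RPC socket (/var/tmp/bdevperf.sock) and a queue depth of 1024 (-q 1024 -o 4096 -w verify -t 10), so the trace can attach an NVMe-oF controller to that listener and drive the deep verify workload against it.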
00:17:07.194 [2024-05-15 13:55:05.652528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66070 ] 00:17:07.453 [2024-05-15 13:55:05.792367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.453 [2024-05-15 13:55:05.885987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.020 13:55:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:08.020 13:55:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:08.020 13:55:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:08.020 13:55:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.020 13:55:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:08.020 NVMe0n1 00:17:08.020 13:55:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.020 13:55:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:08.278 Running I/O for 10 seconds... 00:17:18.255 00:17:18.255 Latency(us) 00:17:18.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.255 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:18.255 Verification LBA range: start 0x0 length 0x4000 00:17:18.255 NVMe0n1 : 10.07 10548.81 41.21 0.00 0.00 96673.56 19266.00 71168.41 00:17:18.255 =================================================================================================================== 00:17:18.255 Total : 10548.81 41.21 0.00 0.00 96673.56 19266.00 71168.41 00:17:18.255 0 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66070 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 66070 ']' 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 66070 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66070 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:18.255 killing process with pid 66070 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66070' 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 66070 00:17:18.255 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.255 00:17:18.255 Latency(us) 00:17:18.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.255 =================================================================================================================== 00:17:18.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.255 13:55:16 nvmf_tcp.nvmf_queue_depth 
-- common/autotest_common.sh@970 -- # wait 66070 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.822 rmmod nvme_tcp 00:17:18.822 rmmod nvme_fabrics 00:17:18.822 rmmod nvme_keyring 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66038 ']' 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66038 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 66038 ']' 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 66038 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66038 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:18.822 killing process with pid 66038 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66038' 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 66038 00:17:18.822 [2024-05-15 13:55:17.301813] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:18.822 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 66038 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:19.080 00:17:19.080 real 0m13.620s 00:17:19.080 user 0m22.923s 00:17:19.080 sys 0m2.624s 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:19.080 13:55:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:19.080 ************************************ 00:17:19.080 END TEST nvmf_queue_depth 00:17:19.080 ************************************ 00:17:19.339 13:55:17 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:19.339 13:55:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:19.339 13:55:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:19.339 13:55:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.339 ************************************ 00:17:19.339 START TEST nvmf_target_multipath 00:17:19.339 ************************************ 00:17:19.339 13:55:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:19.339 * Looking for test storage... 00:17:19.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:19.339 13:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.339 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.340 13:55:17 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:19.340 Cannot find device "nvmf_tgt_br" 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.340 Cannot find device "nvmf_tgt_br2" 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:17:19.340 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:19.599 Cannot find device "nvmf_tgt_br" 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:19.599 Cannot find device "nvmf_tgt_br2" 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:17:19.599 13:55:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.599 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:19.600 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:19.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:19.859 00:17:19.859 --- 10.0.0.2 ping statistics --- 00:17:19.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.859 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:19.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:19.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:19.859 00:17:19.859 --- 10.0.0.3 ping statistics --- 00:17:19.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.859 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:19.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:17:19.859 00:17:19.859 --- 10.0.0.1 ping statistics --- 00:17:19.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.859 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66385 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66385 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 66385 ']' 00:17:19.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.859 13:55:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:19.859 [2024-05-15 13:55:18.313176] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:17:19.859 [2024-05-15 13:55:18.313272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.118 [2024-05-15 13:55:18.457966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.118 [2024-05-15 13:55:18.620106] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.118 [2024-05-15 13:55:18.620166] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.118 [2024-05-15 13:55:18.620177] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.118 [2024-05-15 13:55:18.620186] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.118 [2024-05-15 13:55:18.620193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.118 [2024-05-15 13:55:18.620327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.118 [2024-05-15 13:55:18.620412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.118 [2024-05-15 13:55:18.621572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.118 [2024-05-15 13:55:18.621580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.684 13:55:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:20.684 13:55:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:17:20.684 13:55:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.684 13:55:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.684 13:55:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:20.684 13:55:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.684 13:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:20.942 [2024-05-15 13:55:19.392585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.943 13:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:21.201 Malloc0 00:17:21.201 13:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:17:21.459 13:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.715 13:55:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.715 [2024-05-15 13:55:20.247278] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:21.715 [2024-05-15 13:55:20.247712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:17:21.715 13:55:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:21.972 [2024-05-15 13:55:20.439526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:21.972 13:55:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:17:22.230 13:55:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:17:22.230 13:55:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.230 13:55:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:17:22.230 13:55:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.230 13:55:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:17:22.230 13:55:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66474 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:24.762 13:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:17:24.762 [global] 00:17:24.762 thread=1 00:17:24.762 invalidate=1 00:17:24.762 rw=randrw 00:17:24.762 time_based=1 00:17:24.762 runtime=6 00:17:24.762 ioengine=libaio 00:17:24.762 direct=1 00:17:24.762 bs=4096 00:17:24.762 iodepth=128 00:17:24.762 norandommap=0 00:17:24.762 numjobs=1 00:17:24.762 00:17:24.762 verify_dump=1 00:17:24.762 verify_backlog=512 00:17:24.762 verify_state_save=0 00:17:24.762 do_verify=1 00:17:24.762 verify=crc32c-intel 00:17:24.762 [job0] 00:17:24.762 filename=/dev/nvme0n1 00:17:24.762 Could not set queue depth (nvme0n1) 00:17:24.762 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:24.762 fio-3.35 00:17:24.762 Starting 1 thread 00:17:25.328 13:55:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:25.587 13:55:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:25.845 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:25.846 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:25.846 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:25.846 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:25.846 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:25.846 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
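check_ana_state, whose expansion dominates the trace here, is a bounded poll of the kernel's per-path ANA attribute until it reports the expected value. A simplified rendering (same sysfs path and roughly the same 20-iteration budget as the trace, not a verbatim copy of multipath.sh):

    check_ana_state() {                          # e.g. check_ana_state nvme0c0n1 inaccessible
        local path=$1 expected=$2 timeout=20
        local f=/sys/block/$path/ana_state
        while [[ ! -e $f || $(<"$f") != "$expected" ]]; do
            (( timeout-- > 0 )) || return 1      # give up after ~20 tries
            sleep 1
        done
    }

    # Typical use in this test: flip a path at the target, then wait for the
    # initiator to observe the new state through native NVMe multipath.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    check_ana_state nvme0c0n1 inaccessible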
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:26.103 13:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66474 00:17:31.376 00:17:31.376 job0: (groupid=0, jobs=1): err= 0: pid=66500: Wed May 15 13:55:29 2024 00:17:31.376 read: IOPS=12.5k, BW=49.0MiB/s (51.4MB/s)(294MiB/6004msec) 00:17:31.376 slat (usec): min=5, max=6231, avg=43.76, stdev=158.08 00:17:31.376 clat (usec): min=864, max=26658, avg=7061.19, stdev=1587.34 00:17:31.376 lat (usec): min=948, max=28045, avg=7104.95, stdev=1595.78 00:17:31.376 clat percentiles (usec): 00:17:31.376 | 1.00th=[ 4047], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 6259], 00:17:31.376 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 6980], 00:17:31.376 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8455], 95.00th=[10159], 00:17:31.376 | 99.00th=[12649], 99.50th=[13698], 99.90th=[22414], 99.95th=[24249], 00:17:31.376 | 99.99th=[26084] 00:17:31.376 bw ( KiB/s): min=11176, max=34512, per=52.76%, avg=26485.64, stdev=6566.06, samples=11 00:17:31.376 iops : min= 2794, max= 8628, avg=6621.36, stdev=1641.54, samples=11 00:17:31.376 write: IOPS=7220, BW=28.2MiB/s (29.6MB/s)(147MiB/5223msec); 0 zone resets 00:17:31.376 slat (usec): min=11, max=1658, avg=57.05, stdev=97.56 00:17:31.376 clat (usec): min=804, max=26442, avg=6053.21, stdev=1350.27 00:17:31.376 lat (usec): min=889, max=26494, avg=6110.25, stdev=1353.83 00:17:31.376 clat percentiles (usec): 00:17:31.376 | 1.00th=[ 3458], 5.00th=[ 4080], 10.00th=[ 4555], 20.00th=[ 5211], 00:17:31.376 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 6063], 60.00th=[ 6259], 00:17:31.376 | 70.00th=[ 6456], 80.00th=[ 6718], 90.00th=[ 7111], 95.00th=[ 7832], 00:17:31.376 | 99.00th=[11207], 99.50th=[12387], 99.90th=[14746], 99.95th=[16450], 00:17:31.376 | 99.99th=[25822] 00:17:31.376 bw ( KiB/s): min=11496, max=34248, per=91.61%, avg=26460.73, stdev=6480.05, samples=11 00:17:31.376 iops : min= 2874, max= 8562, avg=6615.09, stdev=1620.12, samples=11 00:17:31.376 lat (usec) : 1000=0.01% 00:17:31.376 lat (msec) : 2=0.19%, 4=1.71%, 10=93.79%, 20=4.20%, 50=0.10% 00:17:31.376 cpu : usr=7.86%, sys=32.03%, ctx=7049, majf=0, minf=114 00:17:31.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:31.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:31.377 issued rwts: total=75349,37714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:31.377 00:17:31.377 Run status group 0 (all jobs): 00:17:31.377 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=294MiB (309MB), run=6004-6004msec 00:17:31.377 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=147MiB (154MB), run=5223-5223msec 00:17:31.377 00:17:31.377 Disk stats (read/write): 00:17:31.377 nvme0n1: ios=73753/37628, merge=0/0, ticks=478401/200831, in_queue=679232, util=98.66% 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66580 00:17:31.377 13:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:17:31.377 [global] 00:17:31.377 thread=1 00:17:31.377 invalidate=1 00:17:31.377 rw=randrw 00:17:31.377 time_based=1 00:17:31.377 runtime=6 00:17:31.377 ioengine=libaio 00:17:31.377 direct=1 00:17:31.377 bs=4096 00:17:31.377 iodepth=128 00:17:31.377 norandommap=0 00:17:31.377 numjobs=1 00:17:31.377 00:17:31.377 verify_dump=1 00:17:31.377 verify_backlog=512 00:17:31.377 verify_state_save=0 00:17:31.377 do_verify=1 00:17:31.377 verify=crc32c-intel 00:17:31.377 [job0] 00:17:31.377 filename=/dev/nvme0n1 00:17:31.377 Could not set queue depth (nvme0n1) 00:17:31.377 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:31.377 fio-3.35 00:17:31.377 Starting 1 thread 00:17:32.313 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:32.314 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
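The check_ana_state traces interleaved through this run reduce to a small polling helper: it derives ana_state_f=/sys/block/<path>/ana_state from the block node name, then waits until that file exists and reports the expected ANA state, giving up after roughly timeout=20 seconds. A minimal reconstruction of that loop, offered as a sketch rather than the literal multipath.sh source (the 1-second sleep and the failure message are assumptions):

check_ana_state() {
    # $1 = multipath block node (e.g. nvme0c0n1), $2 = expected ANA state
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # poll until the sysfs attribute exists and matches the expected state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1
        if ((timeout-- == 0)); then
            echo "timed out waiting for $path to reach $ana_state" >&2
            return 1
        fi
    done
}

Each rpc.py nvmf_subsystem_listener_set_ana_state call in the log is followed by such checks for both paths, confirming the kernel initiator has observed the new state before I/O continues.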
00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:32.573 13:55:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:32.833 13:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66580 00:17:38.105 00:17:38.105 job0: (groupid=0, jobs=1): err= 0: pid=66601: Wed May 15 13:55:35 2024 00:17:38.105 read: IOPS=14.1k, BW=55.0MiB/s (57.7MB/s)(330MiB/6005msec) 00:17:38.105 slat (usec): min=3, max=8718, avg=35.24, stdev=139.70 00:17:38.105 clat (usec): min=1030, max=14605, avg=6292.80, stdev=1271.66 00:17:38.105 lat (usec): min=1046, max=14616, avg=6328.03, stdev=1280.25 00:17:38.105 clat percentiles (usec): 00:17:38.105 | 1.00th=[ 3425], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5407], 00:17:38.105 | 30.00th=[ 5866], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6456], 00:17:38.105 | 70.00th=[ 6652], 80.00th=[ 6915], 90.00th=[ 7439], 95.00th=[ 8979], 00:17:38.105 | 99.00th=[10290], 99.50th=[10552], 99.90th=[11863], 99.95th=[14615], 00:17:38.105 | 99.99th=[14615] 00:17:38.105 bw ( KiB/s): min=14904, max=44782, per=51.06%, avg=28754.45, stdev=9347.37, samples=11 00:17:38.105 iops : min= 3726, max=11195, avg=7188.55, stdev=2336.74, samples=11 00:17:38.105 write: IOPS=8457, BW=33.0MiB/s (34.6MB/s)(169MiB/5130msec); 0 zone resets 00:17:38.105 slat (usec): min=4, max=3211, avg=47.72, stdev=88.58 00:17:38.105 clat (usec): min=723, max=11080, avg=5286.90, stdev=1200.92 00:17:38.105 lat (usec): min=802, max=11162, avg=5334.62, stdev=1208.14 00:17:38.105 clat percentiles (usec): 00:17:38.105 | 1.00th=[ 2573], 5.00th=[ 3195], 10.00th=[ 3621], 20.00th=[ 4146], 00:17:38.105 | 30.00th=[ 4686], 40.00th=[ 5211], 50.00th=[ 5538], 60.00th=[ 5735], 00:17:38.105 | 70.00th=[ 5932], 80.00th=[ 6194], 90.00th=[ 6456], 95.00th=[ 6783], 00:17:38.105 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[10159], 99.95th=[10552], 00:17:38.105 | 99.99th=[10945] 00:17:38.105 bw ( KiB/s): min=15632, max=45125, per=85.05%, avg=28773.36, stdev=8987.29, samples=11 00:17:38.105 iops : min= 3908, max=11281, avg=7193.27, stdev=2246.75, samples=11 00:17:38.105 lat (usec) : 750=0.01%, 1000=0.01% 00:17:38.105 lat (msec) : 2=0.13%, 4=7.56%, 10=91.13%, 20=1.18% 00:17:38.105 cpu : usr=7.78%, sys=30.10%, ctx=8086, majf=0, minf=181 00:17:38.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:38.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:38.105 issued rwts: total=84537,43387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:38.105 00:17:38.105 Run status group 0 (all jobs): 00:17:38.105 READ: bw=55.0MiB/s (57.7MB/s), 55.0MiB/s-55.0MiB/s (57.7MB/s-57.7MB/s), io=330MiB (346MB), run=6005-6005msec 00:17:38.105 WRITE: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=169MiB (178MB), run=5130-5130msec 00:17:38.105 00:17:38.105 Disk stats (read/write): 00:17:38.105 nvme0n1: ios=83511/42576, merge=0/0, ticks=480422/196102, in_queue=676524, util=98.58% 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:38.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1215 -- # local i=0 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:17:38.105 13:55:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.105 rmmod nvme_tcp 00:17:38.105 rmmod nvme_fabrics 00:17:38.105 rmmod nvme_keyring 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 66385 ']' 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66385 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 66385 ']' 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 66385 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66385 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66385' 00:17:38.105 killing process with pid 66385 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 66385 00:17:38.105 [2024-05-15 13:55:36.329368] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 66385 00:17:38.105 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:38.106 00:17:38.106 real 0m18.965s 00:17:38.106 user 1m9.173s 00:17:38.106 sys 0m11.280s 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.106 ************************************ 00:17:38.106 13:55:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:38.106 END TEST nvmf_target_multipath 00:17:38.106 ************************************ 00:17:38.365 13:55:36 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:38.365 13:55:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:38.365 13:55:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:38.365 13:55:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.365 ************************************ 00:17:38.365 START TEST nvmf_zcopy 00:17:38.365 ************************************ 00:17:38.365 13:55:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:38.365 * Looking for test storage... 
00:17:38.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:38.365 13:55:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.365 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:38.365 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:38.366 Cannot find device "nvmf_tgt_br" 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.366 Cannot find device "nvmf_tgt_br2" 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:38.366 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:38.626 Cannot find device "nvmf_tgt_br" 00:17:38.626 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:17:38.626 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:38.626 Cannot find device "nvmf_tgt_br2" 00:17:38.626 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:17:38.626 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:38.626 13:55:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.626 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:38.885 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:38.885 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.885 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.885 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:38.885 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.885 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.885 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:38.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:38.885 00:17:38.885 --- 10.0.0.2 ping statistics --- 00:17:38.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.886 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:38.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:38.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:17:38.886 00:17:38.886 --- 10.0.0.3 ping statistics --- 00:17:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.886 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:17:38.886 00:17:38.886 --- 10.0.0.1 ping statistics --- 00:17:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.886 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66851 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66851 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 66851 ']' 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.886 13:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:38.886 [2024-05-15 13:55:37.373084] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:38.886 [2024-05-15 13:55:37.373160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.205 [2024-05-15 13:55:37.502458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.205 [2024-05-15 13:55:37.604366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.205 [2024-05-15 13:55:37.604419] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
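Behind the nvmf_veth_init calls traced above sits a small virtual topology: the target runs inside the nvmf_tgt_ns_spdk network namespace and owns 10.0.0.2 and 10.0.0.3 on two veth pairs, the initiator keeps 10.0.0.1 on the host side, and the host-side peer interfaces are enslaved to the nvmf_br bridge, which is why the three pings succeed. Condensed to its essential commands, the logged sequence does roughly the following (an illustrative sketch of the ip/iptables calls shown above, not the verbatim common.sh code):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target port, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

With this in place, the nvmf_tgt process started below listens inside the namespace while the initiator-side tools on the host reach it through the bridge.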
00:17:39.205 [2024-05-15 13:55:37.604429] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.205 [2024-05-15 13:55:37.604437] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.205 [2024-05-15 13:55:37.604444] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.205 [2024-05-15 13:55:37.604469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:39.774 [2024-05-15 13:55:38.287787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.774 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:39.775 [2024-05-15 13:55:38.311666] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:39.775 [2024-05-15 13:55:38.311872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:39.775 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:40.033 malloc0 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.033 { 00:17:40.033 "params": { 00:17:40.033 "name": "Nvme$subsystem", 00:17:40.033 "trtype": "$TEST_TRANSPORT", 00:17:40.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.033 "adrfam": "ipv4", 00:17:40.033 "trsvcid": "$NVMF_PORT", 00:17:40.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.033 "hdgst": ${hdgst:-false}, 00:17:40.033 "ddgst": ${ddgst:-false} 00:17:40.033 }, 00:17:40.033 "method": "bdev_nvme_attach_controller" 00:17:40.033 } 00:17:40.033 EOF 00:17:40.033 )") 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:40.033 13:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.033 "params": { 00:17:40.033 "name": "Nvme1", 00:17:40.033 "trtype": "tcp", 00:17:40.033 "traddr": "10.0.0.2", 00:17:40.033 "adrfam": "ipv4", 00:17:40.033 "trsvcid": "4420", 00:17:40.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.033 "hdgst": false, 00:17:40.033 "ddgst": false 00:17:40.033 }, 00:17:40.033 "method": "bdev_nvme_attach_controller" 00:17:40.033 }' 00:17:40.033 [2024-05-15 13:55:38.407510] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:40.033 [2024-05-15 13:55:38.407783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66884 ] 00:17:40.033 [2024-05-15 13:55:38.547823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.301 [2024-05-15 13:55:38.652287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.301 Running I/O for 10 seconds... 
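Stripped of the xtrace noise, the rpc_cmd calls above configure the zcopy target in a handful of steps: create the TCP transport with zero-copy enabled, create the subsystem with a 10-namespace limit, expose it on 10.0.0.2:4420 along with a discovery listener, back it with a 32 MB malloc bdev, and attach that bdev as namespace 1. The same sequence expressed as direct rpc.py invocations would look roughly like this (a sketch only; rpc_cmd in the test harness is a thin wrapper, and the rpc.py path is the one used elsewhere in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                 # 32 MB bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Only after this does bdevperf attach from the host side and start the 10-second verify workload whose results follow.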
00:17:50.284 00:17:50.285 Latency(us) 00:17:50.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:50.285 Verification LBA range: start 0x0 length 0x1000 00:17:50.285 Nvme1n1 : 10.01 7972.85 62.29 0.00 0.00 16009.23 231.94 26109.12 00:17:50.285 =================================================================================================================== 00:17:50.285 Total : 7972.85 62.29 0.00 0.00 16009.23 231.94 26109.12 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66995 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:50.543 [2024-05-15 13:55:49.030846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.543 [2024-05-15 13:55:49.030885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.543 { 00:17:50.543 "params": { 00:17:50.543 "name": "Nvme$subsystem", 00:17:50.543 "trtype": "$TEST_TRANSPORT", 00:17:50.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.543 "adrfam": "ipv4", 00:17:50.543 "trsvcid": "$NVMF_PORT", 00:17:50.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.543 "hdgst": ${hdgst:-false}, 00:17:50.543 "ddgst": ${ddgst:-false} 00:17:50.543 }, 00:17:50.543 "method": "bdev_nvme_attach_controller" 00:17:50.543 } 00:17:50.543 EOF 00:17:50.543 )") 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:17:50.543 [2024-05-15 13:55:49.042808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.543 [2024-05-15 13:55:49.042958] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.543 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:50.544 13:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.544 "params": { 00:17:50.544 "name": "Nvme1", 00:17:50.544 "trtype": "tcp", 00:17:50.544 "traddr": "10.0.0.2", 00:17:50.544 "adrfam": "ipv4", 00:17:50.544 "trsvcid": "4420", 00:17:50.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.544 "hdgst": false, 00:17:50.544 "ddgst": false 00:17:50.544 }, 00:17:50.544 "method": "bdev_nvme_attach_controller" 00:17:50.544 }' 00:17:50.544 [2024-05-15 13:55:49.054796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.544 [2024-05-15 13:55:49.054823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.544 [2024-05-15 13:55:49.066763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.544 [2024-05-15 13:55:49.066792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.544 [2024-05-15 13:55:49.076481] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:17:50.544 [2024-05-15 13:55:49.076549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66995 ] 00:17:50.544 [2024-05-15 13:55:49.082731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.544 [2024-05-15 13:55:49.082863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.544 [2024-05-15 13:55:49.094727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.544 [2024-05-15 13:55:49.094842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.106709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.106845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.122692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.122847] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.138691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.138838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.150663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.150804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.166649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.166729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.182606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:17:50.803 [2024-05-15 13:55:49.182709] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.198596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.198716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.210573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.210685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.216640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.803 [2024-05-15 13:55:49.222557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.222690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.234541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.234655] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.246520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.246618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.258503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.258610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.270489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.270609] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.282469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.282581] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.294449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.294544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.306432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.306527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.316521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.803 [2024-05-15 13:55:49.318414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.318510] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.334407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.334589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.350385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.350535] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.803 [2024-05-15 13:55:49.362365] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:50.803 [2024-05-15 13:55:49.362492] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.374348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.374466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.386330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.386447] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.398305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.398401] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.410303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.410328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.422289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.422315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.434275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.434299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.446271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.446298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.458258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.458280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 [2024-05-15 13:55:49.470284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.062 [2024-05-15 13:55:49.470319] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.062 Running I/O for 5 seconds... 
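The interleaved pairs of messages, "Requested NSID 1 already in use" from subsystem.c and "Unable to add namespace" from nvmf_rpc.c, repeat throughout this phase. They appear to come from the test re-issuing nvmf_subsystem_add_ns for namespace 1 while the bdevperf random read/write job is in flight, so each attempt is rejected because the namespace is still attached; the errors are the exercised path rather than a failure of the run. A hypothetical snippet that would trigger the same pair of target-side messages against the subsystem configured above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# namespace 1 is already attached to cnode1, so every attempt is rejected
for i in {1..3}; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done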
00:17:51.063 [2024-05-15 13:55:49.485607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.485638] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.500602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.500635] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.523656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.523697] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.538486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.538524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.553992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.554026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.567744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.567777] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.585781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.585817] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.600276] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.600322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.063 [2024-05-15 13:55:49.615692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.063 [2024-05-15 13:55:49.615726] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.630394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.630432] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.649023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.649068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.666926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.666962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.682231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.682264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.698008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.698043] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.713301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 
[2024-05-15 13:55:49.713333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.728654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.728687] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.743358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.743390] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.757353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.757386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.772562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.772595] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.788433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.788465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.806298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.806331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.822038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.822074] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.841234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.841268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.856563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.856596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.322 [2024-05-15 13:55:49.872148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.322 [2024-05-15 13:55:49.872180] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:49.886966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:49.886998] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:49.905773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:49.905803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:49.923497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:49.923527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:49.938269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:49.938299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:49.953475] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:49.953505] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:49.968382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:49.968412] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:49.987893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:49.987922] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.006368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.006404] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.017067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.017098] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.032451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.032485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.047832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.047868] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.061704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.061750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.076414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.076448] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.087080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.087114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.101924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.101956] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.117702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.117746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.582 [2024-05-15 13:55:50.132255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.582 [2024-05-15 13:55:50.132286] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.148141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.148174] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.162739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.162779] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.179153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.179189] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.190072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.190105] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.205409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.205443] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.221196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.221229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.237321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.237355] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.841 [2024-05-15 13:55:50.251523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.841 [2024-05-15 13:55:50.251557] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.263319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.263352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.281893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.281930] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.299580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.299621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.313907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.313945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.329460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.329499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.344698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.344743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.358641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.358675] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.373776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.373807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.842 [2024-05-15 13:55:50.389892] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.842 [2024-05-15 13:55:50.389923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.401369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.401402] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.419747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.419780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.434475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.434507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.453446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.453478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.468774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.468806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.488334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.488365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.503434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.503465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.519023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.519054] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.533652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.533686] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.547441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.547472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.565711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.565759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.581119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.581151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.596419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.596452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.611514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.611545] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.627017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.627048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.640793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.640823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.101 [2024-05-15 13:55:50.654554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.101 [2024-05-15 13:55:50.654585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.669195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.669225] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.684789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.684819] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.699087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.699118] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.712778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.712808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.727744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.727776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.743232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.743265] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.757563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.757597] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.772228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.772259] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.787992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.788023] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.802263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.360 [2024-05-15 13:55:50.802295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.360 [2024-05-15 13:55:50.816972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.817004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.361 [2024-05-15 13:55:50.832133] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.832167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.361 [2024-05-15 13:55:50.846854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.846885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.361 [2024-05-15 13:55:50.862073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.862107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.361 [2024-05-15 13:55:50.876493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.876525] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.361 [2024-05-15 13:55:50.890817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.890847] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.361 [2024-05-15 13:55:50.902064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.902092] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.361 [2024-05-15 13:55:50.916694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.361 [2024-05-15 13:55:50.916725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:50.932573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:50.932604] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:50.946078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:50.946109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:50.960566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:50.960596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:50.971212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:50.971240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:50.986120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:50.986149] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.001231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.001260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.016087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.016115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.032101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.032131] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.043312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.043341] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.058225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.058253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.069402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.069431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.084358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.084388] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.099604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.099633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.113982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.114010] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.128534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.128564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.139350] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.139380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.153917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.153945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.620 [2024-05-15 13:55:51.168099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.620 [2024-05-15 13:55:51.168129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.892 [2024-05-15 13:55:51.182017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.892 [2024-05-15 13:55:51.182046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.892 [2024-05-15 13:55:51.196899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.892 [2024-05-15 13:55:51.196927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.892 [2024-05-15 13:55:51.212336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.892 [2024-05-15 13:55:51.212365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.892 [2024-05-15 13:55:51.227035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.892 [2024-05-15 13:55:51.227063] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.892 [2024-05-15 13:55:51.242660] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.892 [2024-05-15 13:55:51.242691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.257379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.257409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.273367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.273397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.288031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.288061] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.303834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.303862] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.317323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.317358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.331931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.331960] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.342766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.342795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.357768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.357797] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.373421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.373452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.387245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.387272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.401700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.401731] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.412496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.412527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.427403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.427434] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.893 [2024-05-15 13:55:51.443283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.893 [2024-05-15 13:55:51.443333] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.457485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.457525] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.472668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.472704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.488327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.488366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.502823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.502860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.517411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.517552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.532074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.532200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.542985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.543108] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.558148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.558272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.573617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.573751] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.587845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.587967] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.602878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.602999] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.618468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.618497] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.633158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.633188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.643752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.643779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.658399] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.658428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.668988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.669016] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.683437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.683466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.154 [2024-05-15 13:55:51.697976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.154 [2024-05-15 13:55:51.698004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.155 [2024-05-15 13:55:51.713423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.155 [2024-05-15 13:55:51.713452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.728193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.728224] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.743857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.743890] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.759016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.759046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.774584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.774614] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.789461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.789491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.804849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.804877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.819271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.819299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.833909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.833937] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.849448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.849478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.864211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.864240] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.879654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.879684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.894638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.894668] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.910278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.910310] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.924129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.924160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.938708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.938753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.949327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.949357] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.414 [2024-05-15 13:55:51.964332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.414 [2024-05-15 13:55:51.964362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:51.980043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:51.980073] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:51.994196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:51.994225] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.008839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:52.008867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.019693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:52.019723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.034388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:52.034417] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.049852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:52.049880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.063692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:52.063723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.078611] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:52.078641] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.093737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.673 [2024-05-15 13:55:52.093778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.673 [2024-05-15 13:55:52.108405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.108436] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.123742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.123773] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.138346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.138377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.148785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.148814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.163334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.163363] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.176926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.176954] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.191549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.191578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.207088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.207117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.674 [2024-05-15 13:55:52.221770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.674 [2024-05-15 13:55:52.221797] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.933 [2024-05-15 13:55:52.237063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.933 [2024-05-15 13:55:52.237091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.933 [2024-05-15 13:55:52.251712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.933 [2024-05-15 13:55:52.251750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.933 [2024-05-15 13:55:52.262502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.262533] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.277773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.277803] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.293997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.294027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.309890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.309921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.325235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.325264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.340952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.340981] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.355057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.355087] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.369416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.369446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.384112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.384142] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.399642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.399673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.414115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.414145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.429823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.429853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.444340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.444372] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.455130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.455161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.469992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.470024] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.934 [2024-05-15 13:55:52.488898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.934 [2024-05-15 13:55:52.488930] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.504366] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.504397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.523661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.523688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.538800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.538829] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.554427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.554458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.568834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.568865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.579818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.579862] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.595510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.595549] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.611100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.611131] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.627108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.627149] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.644641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.644679] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.659655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.659687] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.675508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.675539] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.692631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.692666] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.707913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.707951] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.723250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.723282] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.737804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.737836] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.193 [2024-05-15 13:55:52.748595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.193 [2024-05-15 13:55:52.748628] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.452 [2024-05-15 13:55:52.763562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.452 [2024-05-15 13:55:52.763596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.452 [2024-05-15 13:55:52.778838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.452 [2024-05-15 13:55:52.778870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.452 [2024-05-15 13:55:52.793367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.452 [2024-05-15 13:55:52.793400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.452 [2024-05-15 13:55:52.807226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.452 [2024-05-15 13:55:52.807257] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.452 [2024-05-15 13:55:52.824501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.452 [2024-05-15 13:55:52.824534] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.839269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.839308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.850108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.850138] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.864909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.864939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.880333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.880376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.895183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.895219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.911019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.911055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.925450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.925483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.936736] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.936783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.951937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.951989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.967203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.967249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.981812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.981842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:52.997385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:52.997416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.453 [2024-05-15 13:55:53.011236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.453 [2024-05-15 13:55:53.011269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.026046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.026077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.041332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.041361] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.055640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.055672] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.070558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.070589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.085947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.085978] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.099664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.099695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.114556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.114586] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.129858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.129888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.144856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.144886] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.160648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.160681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.171808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.171840] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.187188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.187219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.202418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.202449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.216697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.216728] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.227607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.227636] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.242142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.242172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.712 [2024-05-15 13:55:53.257220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.712 [2024-05-15 13:55:53.257249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.271904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.271932] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.287417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.287446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.301142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.301171] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.315951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.315979] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.331766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.331794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.346071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.346100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.361214] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.361244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.376929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.376957] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.387899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.387927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.403061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.403091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.418645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.418676] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.432380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.432410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.447409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.447439] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.462575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.462605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.477352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.477382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.492761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.492790] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.507295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.507326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.971 [2024-05-15 13:55:53.518279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.971 [2024-05-15 13:55:53.518311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.533019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.533052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.548814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.548846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.563374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.563407] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.578145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.578175] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.593687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.593718] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.608308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.608338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.619116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.619145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.634206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.634237] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.658407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.658441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.673202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.673233] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.688729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.688768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.703375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.703405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.714171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.714199] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.728904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.728932] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.744138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.744171] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.759029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.759060] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.774346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.774376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.231 [2024-05-15 13:55:53.789283] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.231 [2024-05-15 13:55:53.789315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.804680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.804715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.818998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.819030] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.833342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.833372] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.844225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.844255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.858997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.859027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.869419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.869449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.883592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.883622] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.898175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.898204] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.913541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.913578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.928224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.928256] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.944264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.944304] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.960581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.489 [2024-05-15 13:55:53.960621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.489 [2024-05-15 13:55:53.971461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-05-15 13:55:53.971507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-05-15 13:55:53.986443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-05-15 13:55:53.986485] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-05-15 13:55:54.001435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-05-15 13:55:54.001475] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-05-15 13:55:54.016990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-05-15 13:55:54.017039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-05-15 13:55:54.031853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-05-15 13:55:54.031893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-05-15 13:55:54.047823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-05-15 13:55:54.047860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.058616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.058648] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.073683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.073715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.089053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.089086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.103773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.103803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.119296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.119329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.134212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.134245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.149847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.149880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.165000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.165034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.180826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.180855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.191530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.191560] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.206421] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.206453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.221978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.222019] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.236812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.236847] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.252543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.252578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.270480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.270522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.285155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.285190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.748 [2024-05-15 13:55:54.300411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.748 [2024-05-15 13:55:54.300445] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.315045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.315076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.334318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.334351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.348964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.348993] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.363473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.363503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.379237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.379268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.394265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.394297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.413280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.413313] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.428031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.428060] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.438666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.438697] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.453889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.453917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.469044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.469076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 00:17:56.007 Latency(us) 00:17:56.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.007 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:56.007 Nvme1n1 : 5.01 16157.25 126.23 0.00 0.00 7913.93 2026.62 14107.35 00:17:56.007 =================================================================================================================== 00:17:56.007 Total : 16157.25 126.23 0.00 0.00 7913.93 2026.62 14107.35 00:17:56.007 [2024-05-15 13:55:54.478352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.478480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.490315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.490435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.502316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.502472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.514292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.514431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.526280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.526307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.538248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.538272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.550229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.550254] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.007 [2024-05-15 13:55:54.562211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.007 [2024-05-15 13:55:54.562236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.574196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.574219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.586172] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.586191] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.602152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.602171] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.614142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.614169] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.626116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.626136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.638100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.638120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.650082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.650101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.662072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.662096] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.674054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.674079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 [2024-05-15 13:55:54.686036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.267 [2024-05-15 13:55:54.686058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.267 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66995) - No such process 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 66995 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.267 delay0 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.267 13:55:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:56.526 [2024-05-15 13:55:54.898426] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:04.666 Initializing NVMe Controllers 00:18:04.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.666 Initialization complete. Launching workers. 00:18:04.666 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 32769 00:18:04.666 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32889, failed to submit 124 00:18:04.666 success 32818, unsuccess 71, failed 0 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.666 13:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.666 rmmod nvme_tcp 00:18:04.666 rmmod nvme_fabrics 00:18:04.666 rmmod nvme_keyring 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66851 ']' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66851 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 66851 ']' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 66851 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66851 00:18:04.666 killing process with pid 66851 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66851' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 66851 00:18:04.666 [2024-05-15 13:56:02.067064] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 66851 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:04.666 ************************************ 00:18:04.666 END TEST nvmf_zcopy 00:18:04.666 ************************************ 00:18:04.666 00:18:04.666 real 0m25.641s 00:18:04.666 user 0m41.211s 00:18:04.666 sys 0m8.432s 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:04.666 13:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:04.666 13:56:02 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:04.666 13:56:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:04.666 13:56:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:04.666 13:56:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:04.666 ************************************ 00:18:04.666 START TEST nvmf_nmic 00:18:04.666 ************************************ 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:04.666 * Looking for test storage... 
00:18:04.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:04.666 13:56:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
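For readability, the veth/namespace topology that nvmf_veth_init assembles in the trace below can be summarized as the following sketch. Interface and namespace names are taken from the NVMF_* variables above; the authoritative command sequence lives in test/nvmf/common.sh, so treat this as an approximation of what the trace shows rather than the exact script:

# Sketch: the target runs inside its own network namespace, reachable from the host over a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br             # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2            # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                      # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                      # bridge ties the host-side veth ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic to the target port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 further down simply verify that this topology is reachable before the target application is started.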
00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:04.667 Cannot find device "nvmf_tgt_br" 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.667 Cannot find device "nvmf_tgt_br2" 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:04.667 Cannot find device "nvmf_tgt_br" 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:04.667 Cannot find device "nvmf_tgt_br2" 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:04.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:04.667 00:18:04.667 --- 10.0.0.2 ping statistics --- 00:18:04.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.667 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:04.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:04.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:18:04.667 00:18:04.667 --- 10.0.0.3 ping statistics --- 00:18:04.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.667 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:04.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:04.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:18:04.667 00:18:04.667 --- 10.0.0.1 ping statistics --- 00:18:04.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.667 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.667 13:56:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67324 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67324 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 67324 ']' 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:04.667 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:04.667 [2024-05-15 13:56:03.072535] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:18:04.667 [2024-05-15 13:56:03.072606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.667 [2024-05-15 13:56:03.214572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.970 [2024-05-15 13:56:03.319508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.970 [2024-05-15 13:56:03.319716] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:04.970 [2024-05-15 13:56:03.319909] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.970 [2024-05-15 13:56:03.319959] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.970 [2024-05-15 13:56:03.319987] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.970 [2024-05-15 13:56:03.320152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.970 [2024-05-15 13:56:03.320313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.970 [2024-05-15 13:56:03.321241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.970 [2024-05-15 13:56:03.321242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.600 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.600 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:05.600 13:56:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.600 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.600 13:56:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 [2024-05-15 13:56:04.009018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 Malloc0 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 [2024-05-15 13:56:04.078957] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:05.600 [2024-05-15 13:56:04.079188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.600 test case1: single bdev can't be used in multiple subsystems 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:05.600 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.601 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 [2024-05-15 13:56:04.114958] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:05.885 [2024-05-15 13:56:04.114989] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:05.885 [2024-05-15 13:56:04.114999] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.885 request: 00:18:05.885 { 00:18:05.885 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:05.885 "namespace": { 00:18:05.885 "bdev_name": "Malloc0", 00:18:05.885 "no_auto_visible": false 00:18:05.885 }, 00:18:05.885 "method": "nvmf_subsystem_add_ns", 00:18:05.885 "req_id": 1 00:18:05.885 } 00:18:05.885 Got JSON-RPC error response 00:18:05.885 response: 00:18:05.885 { 00:18:05.885 "code": -32602, 00:18:05.885 "message": "Invalid parameters" 00:18:05.885 } 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:05.885 Adding namespace failed - expected result. 
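The failure above is the whole point of test case 1: Malloc0 is already claimed (type exclusive_write) by subsystem nqn.2016-06.io.spdk:cnode1, so attaching the same bdev to a second subsystem must be rejected. A minimal sketch of the same check driven by hand against a running nvmf_tgt, using the standard scripts/rpc.py client on the default RPC socket and the same names nmic.sh uses, would look roughly like this:

# create a 64 MiB malloc bdev with 512-byte blocks and expose it through the first subsystem
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # succeeds and claims the bdev
# a second subsystem cannot claim the same bdev
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: -32602 "Invalid parameters"

The script treats the non-zero status of that second nvmf_subsystem_add_ns as the pass condition, which is what the 'Adding namespace failed - expected result.' message reflects.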
00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:05.885 test case2: host connect to nvmf target in multiple paths 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 [2024-05-15 13:56:04.135081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:05.885 13:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:08.475 13:56:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:08.475 13:56:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.475 13:56:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:08.475 13:56:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:08.475 13:56:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.475 13:56:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:08.475 13:56:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:08.475 [global] 00:18:08.475 thread=1 00:18:08.475 invalidate=1 00:18:08.475 rw=write 00:18:08.475 time_based=1 00:18:08.475 runtime=1 00:18:08.475 ioengine=libaio 00:18:08.475 direct=1 00:18:08.475 bs=4096 00:18:08.475 iodepth=1 00:18:08.475 norandommap=0 00:18:08.475 numjobs=1 00:18:08.475 00:18:08.475 verify_dump=1 00:18:08.475 verify_backlog=512 00:18:08.475 verify_state_save=0 00:18:08.475 do_verify=1 00:18:08.475 verify=crc32c-intel 00:18:08.475 [job0] 00:18:08.475 filename=/dev/nvme0n1 00:18:08.475 Could not set queue depth (nvme0n1) 00:18:08.475 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.475 fio-3.35 00:18:08.475 Starting 1 thread 00:18:09.411 00:18:09.411 job0: (groupid=0, jobs=1): err= 0: pid=67417: Wed May 15 13:56:07 2024 00:18:09.411 read: IOPS=4053, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1001msec) 00:18:09.411 slat (usec): min=7, max=176, avg= 9.36, stdev= 4.42 00:18:09.411 clat (usec): min=90, max=321, 
avg=137.49, stdev=16.07 00:18:09.411 lat (usec): min=112, max=329, avg=146.85, stdev=16.64 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 123], 00:18:09.411 | 30.00th=[ 129], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:18:09.411 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:18:09.411 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 208], 99.95th=[ 269], 00:18:09.411 | 99.99th=[ 322] 00:18:09.411 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:18:09.411 slat (usec): min=11, max=131, avg=14.29, stdev= 5.70 00:18:09.411 clat (usec): min=58, max=316, avg=82.36, stdev=13.64 00:18:09.411 lat (usec): min=73, max=345, avg=96.65, stdev=15.70 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 64], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73], 00:18:09.411 | 30.00th=[ 76], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 84], 00:18:09.411 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 101], 00:18:09.411 | 99.00th=[ 118], 99.50th=[ 130], 99.90th=[ 237], 99.95th=[ 273], 00:18:09.411 | 99.99th=[ 318] 00:18:09.411 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:18:09.411 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:18:09.411 lat (usec) : 100=47.41%, 250=52.50%, 500=0.09% 00:18:09.411 cpu : usr=2.10%, sys=7.50%, ctx=8154, majf=0, minf=2 00:18:09.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 issued rwts: total=4058,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.411 00:18:09.411 Run status group 0 (all jobs): 00:18:09.411 READ: bw=15.8MiB/s (16.6MB/s), 15.8MiB/s-15.8MiB/s (16.6MB/s-16.6MB/s), io=15.9MiB (16.6MB), run=1001-1001msec 00:18:09.411 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:18:09.411 00:18:09.411 Disk stats (read/write): 00:18:09.411 nvme0n1: ios=3634/3813, merge=0/0, ticks=521/340, in_queue=861, util=91.28% 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:09.411 13:56:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.411 rmmod nvme_tcp 00:18:09.411 rmmod nvme_fabrics 00:18:09.411 rmmod nvme_keyring 00:18:09.411 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67324 ']' 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67324 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 67324 ']' 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 67324 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:09.671 13:56:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67324 00:18:09.671 killing process with pid 67324 00:18:09.671 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:09.671 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:09.671 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67324' 00:18:09.671 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 67324 00:18:09.671 [2024-05-15 13:56:08.016517] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:09.671 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 67324 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.930 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.931 13:56:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:09.931 00:18:09.931 real 0m5.942s 00:18:09.931 user 0m17.968s 00:18:09.931 sys 0m2.759s 00:18:09.931 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:09.931 13:56:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:09.931 ************************************ 00:18:09.931 END TEST nvmf_nmic 00:18:09.931 ************************************ 00:18:09.931 13:56:08 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:09.931 13:56:08 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:09.931 13:56:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:09.931 13:56:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.931 ************************************ 00:18:09.931 START TEST nvmf_fio_target 00:18:09.931 ************************************ 00:18:09.931 13:56:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:10.190 * Looking for test storage... 00:18:10.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.190 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:10.191 Cannot find device "nvmf_tgt_br" 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.191 Cannot find device "nvmf_tgt_br2" 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:18:10.191 Cannot find device "nvmf_tgt_br" 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:10.191 Cannot find device "nvmf_tgt_br2" 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:18:10.191 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:10.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:10.451 00:18:10.451 --- 10.0.0.2 ping statistics --- 00:18:10.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.451 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:10.451 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.451 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:18:10.451 00:18:10.451 --- 10.0.0.3 ping statistics --- 00:18:10.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.451 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:10.451 13:56:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:10.451 00:18:10.451 --- 10.0.0.1 ping statistics --- 00:18:10.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.451 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:10.451 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67600 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67600 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 67600 ']' 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.710 13:56:09 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:10.710 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.710 [2024-05-15 13:56:09.087044] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:18:10.710 [2024-05-15 13:56:09.087116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.710 [2024-05-15 13:56:09.228627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.969 [2024-05-15 13:56:09.333513] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.969 [2024-05-15 13:56:09.333821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.969 [2024-05-15 13:56:09.333940] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.969 [2024-05-15 13:56:09.334033] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.969 [2024-05-15 13:56:09.334061] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.969 [2024-05-15 13:56:09.334262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.969 [2024-05-15 13:56:09.334390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.969 [2024-05-15 13:56:09.334431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.969 [2024-05-15 13:56:09.334486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.538 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.538 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:11.538 13:56:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.538 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.538 13:56:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.538 13:56:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.538 13:56:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:11.839 [2024-05-15 13:56:10.214154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.839 13:56:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.116 13:56:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:12.116 13:56:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.116 13:56:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:12.374 13:56:10 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.374 13:56:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:12.374 13:56:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.632 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:12.632 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:12.891 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.150 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:13.150 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.150 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:13.150 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.410 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:13.410 13:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:13.669 13:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:13.929 13:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:13.929 13:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.200 13:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:14.200 13:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:14.200 13:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.459 [2024-05-15 13:56:12.929540] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:14.459 [2024-05-15 13:56:12.930177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.459 13:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:14.718 13:56:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:14.979 13:56:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:14.979 13:56:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:18:14.979 13:56:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:14.979 13:56:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.979 13:56:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:14.979 13:56:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:14.979 13:56:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:17.512 13:56:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:17.512 13:56:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:17.512 13:56:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.512 13:56:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:17.512 13:56:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.512 13:56:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:17.512 13:56:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:17.512 [global] 00:18:17.512 thread=1 00:18:17.512 invalidate=1 00:18:17.512 rw=write 00:18:17.512 time_based=1 00:18:17.512 runtime=1 00:18:17.512 ioengine=libaio 00:18:17.512 direct=1 00:18:17.512 bs=4096 00:18:17.512 iodepth=1 00:18:17.512 norandommap=0 00:18:17.512 numjobs=1 00:18:17.512 00:18:17.512 verify_dump=1 00:18:17.512 verify_backlog=512 00:18:17.512 verify_state_save=0 00:18:17.512 do_verify=1 00:18:17.512 verify=crc32c-intel 00:18:17.512 [job0] 00:18:17.512 filename=/dev/nvme0n1 00:18:17.512 [job1] 00:18:17.512 filename=/dev/nvme0n2 00:18:17.512 [job2] 00:18:17.512 filename=/dev/nvme0n3 00:18:17.512 [job3] 00:18:17.512 filename=/dev/nvme0n4 00:18:17.512 Could not set queue depth (nvme0n1) 00:18:17.512 Could not set queue depth (nvme0n2) 00:18:17.512 Could not set queue depth (nvme0n3) 00:18:17.512 Could not set queue depth (nvme0n4) 00:18:17.512 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.512 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.512 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.512 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.512 fio-3.35 00:18:17.512 Starting 4 threads 00:18:18.447 00:18:18.447 job0: (groupid=0, jobs=1): err= 0: pid=67781: Wed May 15 13:56:16 2024 00:18:18.447 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:18:18.447 slat (usec): min=7, max=150, avg= 8.55, stdev= 3.46 00:18:18.447 clat (usec): min=153, max=706, avg=265.09, stdev=44.20 00:18:18.447 lat (usec): min=161, max=714, avg=273.64, stdev=44.33 00:18:18.447 clat percentiles (usec): 00:18:18.447 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:18:18.447 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:18:18.447 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 330], 95.00th=[ 343], 00:18:18.447 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 408], 99.95th=[ 416], 00:18:18.447 | 99.99th=[ 709] 00:18:18.447 write: IOPS=2266, BW=9067KiB/s 
(9285kB/s)(9076KiB/1001msec); 0 zone resets 00:18:18.447 slat (usec): min=11, max=141, avg=18.06, stdev=12.71 00:18:18.447 clat (usec): min=82, max=1502, avg=173.72, stdev=55.92 00:18:18.447 lat (usec): min=94, max=1514, avg=191.78, stdev=63.34 00:18:18.447 clat percentiles (usec): 00:18:18.447 | 1.00th=[ 93], 5.00th=[ 103], 10.00th=[ 110], 20.00th=[ 121], 00:18:18.447 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 182], 00:18:18.447 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 237], 95.00th=[ 277], 00:18:18.447 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 408], 99.95th=[ 429], 00:18:18.447 | 99.99th=[ 1500] 00:18:18.447 bw ( KiB/s): min= 8192, max= 8192, per=17.09%, avg=8192.00, stdev= 0.00, samples=1 00:18:18.447 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:18.447 lat (usec) : 100=1.71%, 250=70.63%, 500=27.61%, 750=0.02% 00:18:18.447 lat (msec) : 2=0.02% 00:18:18.447 cpu : usr=1.40%, sys=4.30%, ctx=4317, majf=0, minf=15 00:18:18.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:18.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.447 issued rwts: total=2048,2269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:18.447 job1: (groupid=0, jobs=1): err= 0: pid=67782: Wed May 15 13:56:16 2024 00:18:18.447 read: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec) 00:18:18.447 slat (nsec): min=7310, max=29033, avg=8168.80, stdev=1492.84 00:18:18.447 clat (usec): min=116, max=684, avg=149.91, stdev=17.45 00:18:18.447 lat (usec): min=124, max=695, avg=158.07, stdev=17.66 00:18:18.447 clat percentiles (usec): 00:18:18.447 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:18:18.447 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:18:18.447 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:18:18.447 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 253], 99.95th=[ 388], 00:18:18.447 | 99.99th=[ 685] 00:18:18.447 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:18:18.447 slat (nsec): min=9420, max=93654, avg=13798.42, stdev=5485.58 00:18:18.447 clat (usec): min=78, max=6043, avg=112.00, stdev=148.37 00:18:18.447 lat (usec): min=90, max=6054, avg=125.80, stdev=148.57 00:18:18.447 clat percentiles (usec): 00:18:18.447 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 96], 00:18:18.447 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 104], 60.00th=[ 108], 00:18:18.447 | 70.00th=[ 111], 80.00th=[ 115], 90.00th=[ 122], 95.00th=[ 129], 00:18:18.447 | 99.00th=[ 167], 99.50th=[ 231], 99.90th=[ 2442], 99.95th=[ 4490], 00:18:18.447 | 99.99th=[ 6063] 00:18:18.447 bw ( KiB/s): min=16384, max=16384, per=34.18%, avg=16384.00, stdev= 0.00, samples=1 00:18:18.448 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:18:18.448 lat (usec) : 100=18.06%, 250=81.67%, 500=0.14%, 750=0.03% 00:18:18.448 lat (msec) : 2=0.04%, 4=0.03%, 10=0.03% 00:18:18.448 cpu : usr=1.20%, sys=6.70%, ctx=7017, majf=0, minf=7 00:18:18.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.448 issued rwts: total=3433,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.448 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:18:18.448 job2: (groupid=0, jobs=1): err= 0: pid=67783: Wed May 15 13:56:16 2024 00:18:18.448 read: IOPS=3484, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1001msec) 00:18:18.448 slat (nsec): min=7378, max=44799, avg=8809.24, stdev=2123.01 00:18:18.448 clat (usec): min=123, max=1942, avg=149.75, stdev=32.65 00:18:18.448 lat (usec): min=131, max=1950, avg=158.56, stdev=32.82 00:18:18.448 clat percentiles (usec): 00:18:18.448 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:18:18.448 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:18:18.448 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:18:18.448 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 208], 99.95th=[ 245], 00:18:18.448 | 99.99th=[ 1942] 00:18:18.448 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:18:18.448 slat (usec): min=9, max=109, avg=14.29, stdev= 5.58 00:18:18.448 clat (usec): min=73, max=654, avg=108.52, stdev=15.82 00:18:18.448 lat (usec): min=86, max=668, avg=122.81, stdev=17.69 00:18:18.448 clat percentiles (usec): 00:18:18.448 | 1.00th=[ 88], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 99], 00:18:18.448 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:18:18.448 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 125], 95.00th=[ 133], 00:18:18.448 | 99.00th=[ 145], 99.50th=[ 153], 99.90th=[ 182], 99.95th=[ 400], 00:18:18.448 | 99.99th=[ 652] 00:18:18.448 bw ( KiB/s): min=16384, max=16384, per=34.18%, avg=16384.00, stdev= 0.00, samples=1 00:18:18.448 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:18:18.448 lat (usec) : 100=12.54%, 250=87.42%, 500=0.01%, 750=0.01% 00:18:18.448 lat (msec) : 2=0.01% 00:18:18.448 cpu : usr=1.20%, sys=7.20%, ctx=7073, majf=0, minf=5 00:18:18.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.448 issued rwts: total=3488,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:18.448 job3: (groupid=0, jobs=1): err= 0: pid=67784: Wed May 15 13:56:16 2024 00:18:18.448 read: IOPS=2056, BW=8228KiB/s (8425kB/s)(8236KiB/1001msec) 00:18:18.448 slat (nsec): min=7058, max=35136, avg=8039.41, stdev=1796.56 00:18:18.448 clat (usec): min=128, max=1505, avg=258.40, stdev=47.65 00:18:18.448 lat (usec): min=135, max=1512, avg=266.44, stdev=47.85 00:18:18.448 clat percentiles (usec): 00:18:18.448 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:18:18.448 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:18:18.448 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 330], 00:18:18.448 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 420], 99.95th=[ 469], 00:18:18.448 | 99.99th=[ 1500] 00:18:18.448 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:18:18.448 slat (nsec): min=11183, max=99890, avg=13681.59, stdev=5671.94 00:18:18.448 clat (usec): min=92, max=638, avg=161.02, stdev=34.23 00:18:18.448 lat (usec): min=105, max=650, avg=174.70, stdev=34.63 00:18:18.448 clat percentiles (usec): 00:18:18.448 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 122], 00:18:18.448 | 30.00th=[ 137], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 176], 00:18:18.448 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:18:18.448 | 99.00th=[ 229], 
99.50th=[ 241], 99.90th=[ 258], 99.95th=[ 326], 00:18:18.448 | 99.99th=[ 635] 00:18:18.448 bw ( KiB/s): min=10496, max=10496, per=21.89%, avg=10496.00, stdev= 0.00, samples=1 00:18:18.448 iops : min= 2624, max= 2624, avg=2624.00, stdev= 0.00, samples=1 00:18:18.448 lat (usec) : 100=0.48%, 250=79.28%, 500=20.20%, 750=0.02% 00:18:18.448 lat (msec) : 2=0.02% 00:18:18.448 cpu : usr=1.20%, sys=4.10%, ctx=4620, majf=0, minf=8 00:18:18.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:18.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.448 issued rwts: total=2059,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:18.448 00:18:18.448 Run status group 0 (all jobs): 00:18:18.448 READ: bw=43.0MiB/s (45.1MB/s), 8184KiB/s-13.6MiB/s (8380kB/s-14.3MB/s), io=43.1MiB (45.2MB), run=1001-1001msec 00:18:18.448 WRITE: bw=46.8MiB/s (49.1MB/s), 9067KiB/s-14.0MiB/s (9285kB/s-14.7MB/s), io=46.9MiB (49.1MB), run=1001-1001msec 00:18:18.448 00:18:18.448 Disk stats (read/write): 00:18:18.448 nvme0n1: ios=1781/2048, merge=0/0, ticks=486/368, in_queue=854, util=88.47% 00:18:18.448 nvme0n2: ios=3040/3072, merge=0/0, ticks=471/353, in_queue=824, util=88.46% 00:18:18.448 nvme0n3: ios=3072/3072, merge=0/0, ticks=494/355, in_queue=849, util=89.91% 00:18:18.448 nvme0n4: ios=1884/2048, merge=0/0, ticks=500/335, in_queue=835, util=89.86% 00:18:18.448 13:56:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:18.707 [global] 00:18:18.707 thread=1 00:18:18.707 invalidate=1 00:18:18.707 rw=randwrite 00:18:18.707 time_based=1 00:18:18.707 runtime=1 00:18:18.707 ioengine=libaio 00:18:18.707 direct=1 00:18:18.707 bs=4096 00:18:18.707 iodepth=1 00:18:18.707 norandommap=0 00:18:18.707 numjobs=1 00:18:18.707 00:18:18.707 verify_dump=1 00:18:18.707 verify_backlog=512 00:18:18.707 verify_state_save=0 00:18:18.707 do_verify=1 00:18:18.707 verify=crc32c-intel 00:18:18.707 [job0] 00:18:18.707 filename=/dev/nvme0n1 00:18:18.707 [job1] 00:18:18.707 filename=/dev/nvme0n2 00:18:18.707 [job2] 00:18:18.707 filename=/dev/nvme0n3 00:18:18.707 [job3] 00:18:18.707 filename=/dev/nvme0n4 00:18:18.707 Could not set queue depth (nvme0n1) 00:18:18.707 Could not set queue depth (nvme0n2) 00:18:18.707 Could not set queue depth (nvme0n3) 00:18:18.707 Could not set queue depth (nvme0n4) 00:18:18.707 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.707 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.707 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.707 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.707 fio-3.35 00:18:18.707 Starting 4 threads 00:18:20.082 00:18:20.082 job0: (groupid=0, jobs=1): err= 0: pid=67837: Wed May 15 13:56:18 2024 00:18:20.082 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:18:20.082 slat (nsec): min=7181, max=43562, avg=8370.88, stdev=1830.53 00:18:20.082 clat (usec): min=117, max=1764, avg=144.18, stdev=30.74 00:18:20.082 lat (usec): min=125, max=1773, avg=152.55, stdev=30.88 00:18:20.082 clat percentiles (usec): 00:18:20.082 | 1.00th=[ 
123], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 135], 00:18:20.082 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 145], 00:18:20.082 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 167], 00:18:20.082 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 383], 99.95th=[ 437], 00:18:20.082 | 99.99th=[ 1762] 00:18:20.082 write: IOPS=3845, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:18:20.082 slat (usec): min=9, max=140, avg=13.85, stdev= 6.00 00:18:20.082 clat (usec): min=72, max=225, avg=102.06, stdev=11.48 00:18:20.082 lat (usec): min=84, max=365, avg=115.91, stdev=14.38 00:18:20.082 clat percentiles (usec): 00:18:20.082 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 93], 00:18:20.082 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 102], 00:18:20.082 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 124], 00:18:20.082 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 163], 99.95th=[ 172], 00:18:20.082 | 99.99th=[ 227] 00:18:20.082 bw ( KiB/s): min=16351, max=16351, per=34.39%, avg=16351.00, stdev= 0.00, samples=1 00:18:20.082 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:18:20.082 lat (usec) : 100=26.42%, 250=73.51%, 500=0.05% 00:18:20.082 lat (msec) : 2=0.01% 00:18:20.082 cpu : usr=2.10%, sys=6.30%, ctx=7433, majf=0, minf=5 00:18:20.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.082 issued rwts: total=3584,3849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.082 job1: (groupid=0, jobs=1): err= 0: pid=67838: Wed May 15 13:56:18 2024 00:18:20.082 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:18:20.082 slat (nsec): min=6998, max=27121, avg=8053.30, stdev=1643.67 00:18:20.082 clat (usec): min=114, max=1896, avg=142.90, stdev=33.09 00:18:20.082 lat (usec): min=122, max=1904, avg=150.95, stdev=33.17 00:18:20.082 clat percentiles (usec): 00:18:20.082 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:18:20.082 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:18:20.082 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:18:20.082 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 293], 99.95th=[ 644], 00:18:20.082 | 99.99th=[ 1893] 00:18:20.082 write: IOPS=3877, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1001msec); 0 zone resets 00:18:20.082 slat (usec): min=11, max=138, avg=13.83, stdev= 5.87 00:18:20.082 clat (usec): min=79, max=226, avg=102.59, stdev=11.42 00:18:20.082 lat (usec): min=91, max=363, avg=116.43, stdev=14.15 00:18:20.082 clat percentiles (usec): 00:18:20.082 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:18:20.082 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 103], 00:18:20.082 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 125], 00:18:20.082 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 161], 99.95th=[ 227], 00:18:20.082 | 99.99th=[ 227] 00:18:20.082 bw ( KiB/s): min=16351, max=16351, per=34.39%, avg=16351.00, stdev= 0.00, samples=1 00:18:20.082 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:18:20.082 lat (usec) : 100=25.63%, 250=74.32%, 500=0.03%, 750=0.01% 00:18:20.082 lat (msec) : 2=0.01% 00:18:20.082 cpu : usr=1.20%, sys=7.30%, ctx=7465, majf=0, minf=13 00:18:20.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:18:20.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.082 issued rwts: total=3584,3881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.082 job2: (groupid=0, jobs=1): err= 0: pid=67840: Wed May 15 13:56:18 2024 00:18:20.082 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:18:20.082 slat (nsec): min=5850, max=54451, avg=7733.98, stdev=2764.02 00:18:20.082 clat (usec): min=197, max=575, avg=255.31, stdev=24.97 00:18:20.083 lat (usec): min=204, max=584, avg=263.04, stdev=25.67 00:18:20.083 clat percentiles (usec): 00:18:20.083 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:18:20.083 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:18:20.083 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:18:20.083 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 498], 99.95th=[ 562], 00:18:20.083 | 99.99th=[ 578] 00:18:20.083 write: IOPS=2079, BW=8320KiB/s (8519kB/s)(8328KiB/1001msec); 0 zone resets 00:18:20.083 slat (usec): min=7, max=122, avg=13.40, stdev= 7.41 00:18:20.083 clat (usec): min=104, max=544, avg=206.45, stdev=21.24 00:18:20.083 lat (usec): min=145, max=555, avg=219.84, stdev=23.77 00:18:20.083 clat percentiles (usec): 00:18:20.083 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:18:20.083 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:18:20.083 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 241], 00:18:20.083 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 310], 00:18:20.083 | 99.99th=[ 545] 00:18:20.083 bw ( KiB/s): min= 8175, max= 8175, per=17.20%, avg=8175.00, stdev= 0.00, samples=1 00:18:20.083 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:18:20.083 lat (usec) : 250=70.46%, 500=29.47%, 750=0.07% 00:18:20.083 cpu : usr=1.00%, sys=3.70%, ctx=4130, majf=0, minf=10 00:18:20.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.083 issued rwts: total=2048,2082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.083 job3: (groupid=0, jobs=1): err= 0: pid=67842: Wed May 15 13:56:18 2024 00:18:20.083 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:18:20.083 slat (nsec): min=5978, max=79810, avg=8570.04, stdev=3417.94 00:18:20.083 clat (usec): min=196, max=537, avg=254.36, stdev=24.67 00:18:20.083 lat (usec): min=204, max=545, avg=262.93, stdev=25.41 00:18:20.083 clat percentiles (usec): 00:18:20.083 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 235], 00:18:20.083 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:18:20.083 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 297], 00:18:20.083 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 449], 99.95th=[ 529], 00:18:20.083 | 99.99th=[ 537] 00:18:20.083 write: IOPS=2082, BW=8332KiB/s (8532kB/s)(8340KiB/1001msec); 0 zone resets 00:18:20.083 slat (usec): min=7, max=104, avg=15.04, stdev= 7.26 00:18:20.083 clat (usec): min=72, max=418, avg=204.31, stdev=20.71 00:18:20.083 lat (usec): min=84, max=431, avg=219.35, stdev=23.28 00:18:20.083 clat percentiles (usec): 00:18:20.083 | 
1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:18:20.083 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:18:20.083 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239], 00:18:20.083 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 302], 99.95th=[ 334], 00:18:20.083 | 99.99th=[ 420] 00:18:20.083 bw ( KiB/s): min= 8192, max= 8192, per=17.23%, avg=8192.00, stdev= 0.00, samples=1 00:18:20.083 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:20.083 lat (usec) : 100=0.07%, 250=71.96%, 500=27.92%, 750=0.05% 00:18:20.083 cpu : usr=1.20%, sys=4.30%, ctx=4134, majf=0, minf=17 00:18:20.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.083 issued rwts: total=2048,2085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.083 00:18:20.083 Run status group 0 (all jobs): 00:18:20.083 READ: bw=44.0MiB/s (46.1MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=44.0MiB (46.1MB), run=1001-1001msec 00:18:20.083 WRITE: bw=46.4MiB/s (48.7MB/s), 8320KiB/s-15.1MiB/s (8519kB/s-15.9MB/s), io=46.5MiB (48.7MB), run=1001-1001msec 00:18:20.083 00:18:20.083 Disk stats (read/write): 00:18:20.083 nvme0n1: ios=3122/3302, merge=0/0, ticks=491/366, in_queue=857, util=89.27% 00:18:20.083 nvme0n2: ios=3121/3355, merge=0/0, ticks=466/373, in_queue=839, util=88.47% 00:18:20.083 nvme0n3: ios=1584/2048, merge=0/0, ticks=422/406, in_queue=828, util=90.39% 00:18:20.083 nvme0n4: ios=1551/2048, merge=0/0, ticks=391/431, in_queue=822, util=89.82% 00:18:20.083 13:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:20.083 [global] 00:18:20.083 thread=1 00:18:20.083 invalidate=1 00:18:20.083 rw=write 00:18:20.083 time_based=1 00:18:20.083 runtime=1 00:18:20.083 ioengine=libaio 00:18:20.083 direct=1 00:18:20.083 bs=4096 00:18:20.083 iodepth=128 00:18:20.083 norandommap=0 00:18:20.083 numjobs=1 00:18:20.083 00:18:20.083 verify_dump=1 00:18:20.083 verify_backlog=512 00:18:20.083 verify_state_save=0 00:18:20.083 do_verify=1 00:18:20.083 verify=crc32c-intel 00:18:20.083 [job0] 00:18:20.083 filename=/dev/nvme0n1 00:18:20.083 [job1] 00:18:20.083 filename=/dev/nvme0n2 00:18:20.083 [job2] 00:18:20.083 filename=/dev/nvme0n3 00:18:20.083 [job3] 00:18:20.083 filename=/dev/nvme0n4 00:18:20.083 Could not set queue depth (nvme0n1) 00:18:20.083 Could not set queue depth (nvme0n2) 00:18:20.083 Could not set queue depth (nvme0n3) 00:18:20.083 Could not set queue depth (nvme0n4) 00:18:20.083 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.083 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.083 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.083 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.083 fio-3.35 00:18:20.083 Starting 4 threads 00:18:21.493 00:18:21.493 job0: (groupid=0, jobs=1): err= 0: pid=67899: Wed May 15 13:56:19 2024 00:18:21.493 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:18:21.493 slat (usec): min=6, max=11180, avg=127.47, 
stdev=592.65 00:18:21.493 clat (usec): min=4928, max=62927, avg=16111.37, stdev=10900.48 00:18:21.493 lat (usec): min=4936, max=62935, avg=16238.84, stdev=10982.30 00:18:21.493 clat percentiles (usec): 00:18:21.493 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10290], 00:18:21.493 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:18:21.493 | 70.00th=[11469], 80.00th=[23987], 90.00th=[34866], 95.00th=[40109], 00:18:21.493 | 99.00th=[53740], 99.50th=[57410], 99.90th=[63177], 99.95th=[63177], 00:18:21.493 | 99.99th=[63177] 00:18:21.493 write: IOPS=4098, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1004msec); 0 zone resets 00:18:21.493 slat (usec): min=8, max=10795, avg=106.88, stdev=498.11 00:18:21.493 clat (usec): min=1437, max=63162, avg=14787.00, stdev=10816.87 00:18:21.493 lat (usec): min=4630, max=63176, avg=14893.88, stdev=10880.54 00:18:21.493 clat percentiles (usec): 00:18:21.493 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:18:21.493 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:18:21.493 | 70.00th=[11076], 80.00th=[20055], 90.00th=[26608], 95.00th=[40633], 00:18:21.493 | 99.00th=[59507], 99.50th=[61080], 99.90th=[62653], 99.95th=[62653], 00:18:21.493 | 99.99th=[63177] 00:18:21.493 bw ( KiB/s): min= 8192, max=24625, per=25.57%, avg=16408.50, stdev=11619.89, samples=2 00:18:21.493 iops : min= 2048, max= 6156, avg=4102.00, stdev=2904.79, samples=2 00:18:21.493 lat (msec) : 2=0.01%, 10=18.96%, 20=59.09%, 50=18.62%, 100=3.31% 00:18:21.493 cpu : usr=3.99%, sys=14.36%, ctx=478, majf=0, minf=10 00:18:21.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:21.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.494 issued rwts: total=4096,4115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.494 job1: (groupid=0, jobs=1): err= 0: pid=67901: Wed May 15 13:56:19 2024 00:18:21.494 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:18:21.494 slat (usec): min=4, max=14756, avg=234.77, stdev=1013.11 00:18:21.494 clat (usec): min=16605, max=64568, avg=29115.93, stdev=10802.40 00:18:21.494 lat (usec): min=16616, max=67742, avg=29350.69, stdev=10891.08 00:18:21.494 clat percentiles (usec): 00:18:21.494 | 1.00th=[17171], 5.00th=[19268], 10.00th=[21365], 20.00th=[21627], 00:18:21.494 | 30.00th=[22152], 40.00th=[22414], 50.00th=[23987], 60.00th=[25822], 00:18:21.494 | 70.00th=[31327], 80.00th=[36439], 90.00th=[43779], 95.00th=[54789], 00:18:21.494 | 99.00th=[62129], 99.50th=[63177], 99.90th=[63701], 99.95th=[64750], 00:18:21.494 | 99.99th=[64750] 00:18:21.494 write: IOPS=2269, BW=9079KiB/s (9296kB/s)(9124KiB/1005msec); 0 zone resets 00:18:21.494 slat (usec): min=7, max=10894, avg=224.40, stdev=905.27 00:18:21.494 clat (usec): min=1277, max=65699, avg=29376.00, stdev=13441.85 00:18:21.494 lat (usec): min=4940, max=65728, avg=29600.40, stdev=13526.48 00:18:21.494 clat percentiles (usec): 00:18:21.494 | 1.00th=[ 5276], 5.00th=[16712], 10.00th=[21103], 20.00th=[21627], 00:18:21.494 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22676], 60.00th=[23725], 00:18:21.494 | 70.00th=[28181], 80.00th=[42206], 90.00th=[53740], 95.00th=[57934], 00:18:21.494 | 99.00th=[62653], 99.50th=[62653], 99.90th=[63177], 99.95th=[63701], 00:18:21.494 | 99.99th=[65799] 00:18:21.494 bw ( KiB/s): min= 6968, max=10256, per=13.42%, 
avg=8612.00, stdev=2324.97, samples=2 00:18:21.494 iops : min= 1742, max= 2564, avg=2153.00, stdev=581.24, samples=2 00:18:21.494 lat (msec) : 2=0.02%, 10=0.97%, 20=6.42%, 50=81.70%, 100=10.88% 00:18:21.494 cpu : usr=0.70%, sys=4.08%, ctx=525, majf=0, minf=17 00:18:21.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:21.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.494 issued rwts: total=2048,2281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.494 job2: (groupid=0, jobs=1): err= 0: pid=67905: Wed May 15 13:56:19 2024 00:18:21.494 read: IOPS=5958, BW=23.3MiB/s (24.4MB/s)(23.3MiB/1002msec) 00:18:21.494 slat (usec): min=17, max=2417, avg=76.48, stdev=308.57 00:18:21.494 clat (usec): min=980, max=13168, avg=10525.03, stdev=1458.58 00:18:21.494 lat (usec): min=1000, max=13740, avg=10601.50, stdev=1436.74 00:18:21.494 clat percentiles (usec): 00:18:21.494 | 1.00th=[ 5538], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9503], 00:18:21.494 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[11338], 00:18:21.494 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12125], 95.00th=[12387], 00:18:21.494 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13042], 99.95th=[13173], 00:18:21.494 | 99.99th=[13173] 00:18:21.494 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:18:21.494 slat (usec): min=21, max=2183, avg=76.88, stdev=239.92 00:18:21.494 clat (usec): min=7334, max=12855, avg=10365.83, stdev=1186.80 00:18:21.494 lat (usec): min=7410, max=12888, avg=10442.72, stdev=1173.80 00:18:21.494 clat percentiles (usec): 00:18:21.494 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9241], 00:18:21.494 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[11076], 00:18:21.494 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:18:21.494 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12649], 99.95th=[12649], 00:18:21.494 | 99.99th=[12911] 00:18:21.494 bw ( KiB/s): min=22092, max=27104, per=38.33%, avg=24598.00, stdev=3544.02, samples=2 00:18:21.494 iops : min= 5523, max= 6776, avg=6149.50, stdev=886.00, samples=2 00:18:21.494 lat (usec) : 1000=0.01% 00:18:21.494 lat (msec) : 2=0.14%, 4=0.26%, 10=49.17%, 20=50.41% 00:18:21.494 cpu : usr=8.39%, sys=23.78%, ctx=405, majf=0, minf=13 00:18:21.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:21.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.494 issued rwts: total=5970,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.494 job3: (groupid=0, jobs=1): err= 0: pid=67906: Wed May 15 13:56:19 2024 00:18:21.494 read: IOPS=3164, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1004msec) 00:18:21.494 slat (usec): min=9, max=7687, avg=151.12, stdev=630.33 00:18:21.494 clat (usec): min=1270, max=37974, avg=18997.76, stdev=5449.60 00:18:21.494 lat (usec): min=4958, max=37994, avg=19148.88, stdev=5502.28 00:18:21.494 clat percentiles (usec): 00:18:21.494 | 1.00th=[ 7504], 5.00th=[12518], 10.00th=[13173], 20.00th=[14615], 00:18:21.494 | 30.00th=[15664], 40.00th=[16319], 50.00th=[17695], 60.00th=[19530], 00:18:21.494 | 70.00th=[21890], 80.00th=[22414], 90.00th=[26084], 95.00th=[31065], 
00:18:21.494 | 99.00th=[33162], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:18:21.494 | 99.99th=[38011] 00:18:21.494 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:18:21.494 slat (usec): min=9, max=6554, avg=134.70, stdev=538.84 00:18:21.494 clat (usec): min=10284, max=32698, avg=18462.55, stdev=5898.46 00:18:21.494 lat (usec): min=10319, max=32732, avg=18597.25, stdev=5951.13 00:18:21.494 clat percentiles (usec): 00:18:21.494 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12649], 20.00th=[13173], 00:18:21.494 | 30.00th=[13566], 40.00th=[14615], 50.00th=[15533], 60.00th=[21103], 00:18:21.494 | 70.00th=[21890], 80.00th=[22676], 90.00th=[27395], 95.00th=[31065], 00:18:21.494 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:18:21.494 | 99.99th=[32637] 00:18:21.494 bw ( KiB/s): min=13362, max=15152, per=22.22%, avg=14257.00, stdev=1265.72, samples=2 00:18:21.494 iops : min= 3340, max= 3788, avg=3564.00, stdev=316.78, samples=2 00:18:21.494 lat (msec) : 2=0.01%, 10=0.62%, 20=58.38%, 50=40.99% 00:18:21.494 cpu : usr=3.39%, sys=14.36%, ctx=343, majf=0, minf=7 00:18:21.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:21.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.494 issued rwts: total=3177,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.494 00:18:21.494 Run status group 0 (all jobs): 00:18:21.494 READ: bw=59.4MiB/s (62.3MB/s), 8151KiB/s-23.3MiB/s (8347kB/s-24.4MB/s), io=59.7MiB (62.6MB), run=1002-1005msec 00:18:21.494 WRITE: bw=62.7MiB/s (65.7MB/s), 9079KiB/s-24.0MiB/s (9296kB/s-25.1MB/s), io=63.0MiB (66.0MB), run=1002-1005msec 00:18:21.494 00:18:21.494 Disk stats (read/write): 00:18:21.494 nvme0n1: ios=3634/4096, merge=0/0, ticks=14167/16191, in_queue=30358, util=85.74% 00:18:21.494 nvme0n2: ios=1694/2048, merge=0/0, ticks=15032/19266, in_queue=34298, util=87.49% 00:18:21.494 nvme0n3: ios=4864/5120, merge=0/0, ticks=10960/10097, in_queue=21057, util=88.95% 00:18:21.494 nvme0n4: ios=2560/2957, merge=0/0, ticks=16296/16216, in_queue=32512, util=89.31% 00:18:21.494 13:56:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:21.494 [global] 00:18:21.494 thread=1 00:18:21.494 invalidate=1 00:18:21.494 rw=randwrite 00:18:21.494 time_based=1 00:18:21.494 runtime=1 00:18:21.494 ioengine=libaio 00:18:21.494 direct=1 00:18:21.494 bs=4096 00:18:21.494 iodepth=128 00:18:21.494 norandommap=0 00:18:21.494 numjobs=1 00:18:21.494 00:18:21.494 verify_dump=1 00:18:21.494 verify_backlog=512 00:18:21.494 verify_state_save=0 00:18:21.494 do_verify=1 00:18:21.494 verify=crc32c-intel 00:18:21.494 [job0] 00:18:21.494 filename=/dev/nvme0n1 00:18:21.494 [job1] 00:18:21.494 filename=/dev/nvme0n2 00:18:21.494 [job2] 00:18:21.494 filename=/dev/nvme0n3 00:18:21.494 [job3] 00:18:21.494 filename=/dev/nvme0n4 00:18:21.494 Could not set queue depth (nvme0n1) 00:18:21.494 Could not set queue depth (nvme0n2) 00:18:21.494 Could not set queue depth (nvme0n3) 00:18:21.494 Could not set queue depth (nvme0n4) 00:18:21.752 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.752 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:18:21.752 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.752 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.752 fio-3.35 00:18:21.752 Starting 4 threads 00:18:23.128 00:18:23.128 job0: (groupid=0, jobs=1): err= 0: pid=67961: Wed May 15 13:56:21 2024 00:18:23.128 read: IOPS=7463, BW=29.2MiB/s (30.6MB/s)(29.2MiB/1002msec) 00:18:23.128 slat (usec): min=6, max=2861, avg=63.06, stdev=223.86 00:18:23.128 clat (usec): min=486, max=11096, avg=8500.08, stdev=813.84 00:18:23.128 lat (usec): min=1331, max=11114, avg=8563.14, stdev=832.06 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[ 5997], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8160], 00:18:23.128 | 30.00th=[ 8225], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:18:23.128 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9634], 00:18:23.128 | 99.00th=[10159], 99.50th=[10290], 99.90th=[10683], 99.95th=[10945], 00:18:23.128 | 99.99th=[11076] 00:18:23.128 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:18:23.128 slat (usec): min=8, max=2285, avg=60.52, stdev=212.86 00:18:23.128 clat (usec): min=6201, max=10870, avg=8231.99, stdev=554.20 00:18:23.128 lat (usec): min=6214, max=10884, avg=8292.51, stdev=585.37 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[ 7046], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 7832], 00:18:23.128 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8225], 00:18:23.128 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:18:23.128 | 99.00th=[10028], 99.50th=[10290], 99.90th=[10814], 99.95th=[10814], 00:18:23.128 | 99.99th=[10814] 00:18:23.128 bw ( KiB/s): min=32112, max=32112, per=54.28%, avg=32112.00, stdev= 0.00, samples=1 00:18:23.128 iops : min= 8028, max= 8028, avg=8028.00, stdev= 0.00, samples=1 00:18:23.128 lat (usec) : 500=0.01% 00:18:23.128 lat (msec) : 2=0.15%, 4=0.13%, 10=98.51%, 20=1.21% 00:18:23.128 cpu : usr=5.29%, sys=23.38%, ctx=649, majf=0, minf=11 00:18:23.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:23.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.128 issued rwts: total=7478,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.128 job1: (groupid=0, jobs=1): err= 0: pid=67962: Wed May 15 13:56:21 2024 00:18:23.128 read: IOPS=1326, BW=5307KiB/s (5434kB/s)(5328KiB/1004msec) 00:18:23.128 slat (usec): min=5, max=9870, avg=310.76, stdev=1139.10 00:18:23.128 clat (usec): min=1720, max=58785, avg=38405.63, stdev=8595.29 00:18:23.128 lat (usec): min=3294, max=61726, avg=38716.39, stdev=8610.92 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[ 3720], 5.00th=[27395], 10.00th=[30278], 20.00th=[33817], 00:18:23.128 | 30.00th=[34866], 40.00th=[37487], 50.00th=[39584], 60.00th=[41681], 00:18:23.128 | 70.00th=[43254], 80.00th=[44827], 90.00th=[47449], 95.00th=[49546], 00:18:23.128 | 99.00th=[55313], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:18:23.128 | 99.99th=[58983] 00:18:23.128 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:18:23.128 slat (usec): min=6, max=12096, avg=371.20, stdev=1130.76 00:18:23.128 clat (usec): min=16592, max=84282, avg=49087.56, stdev=17296.53 
00:18:23.128 lat (usec): min=16689, max=84315, avg=49458.76, stdev=17409.25 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[21890], 5.00th=[27657], 10.00th=[31851], 20.00th=[33817], 00:18:23.128 | 30.00th=[35390], 40.00th=[36963], 50.00th=[39584], 60.00th=[56886], 00:18:23.128 | 70.00th=[63701], 80.00th=[69731], 90.00th=[73925], 95.00th=[76022], 00:18:23.128 | 99.00th=[79168], 99.50th=[80217], 99.90th=[80217], 99.95th=[84411], 00:18:23.128 | 99.99th=[84411] 00:18:23.128 bw ( KiB/s): min= 5448, max= 6853, per=10.40%, avg=6150.50, stdev=993.49, samples=2 00:18:23.128 iops : min= 1362, max= 1713, avg=1537.50, stdev=248.19, samples=2 00:18:23.128 lat (msec) : 2=0.03%, 4=0.45%, 10=0.21%, 20=1.29%, 50=72.59% 00:18:23.128 lat (msec) : 100=25.42% 00:18:23.128 cpu : usr=1.10%, sys=6.48%, ctx=496, majf=0, minf=13 00:18:23.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:18:23.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.128 issued rwts: total=1332,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.128 job2: (groupid=0, jobs=1): err= 0: pid=67963: Wed May 15 13:56:21 2024 00:18:23.128 read: IOPS=3634, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1004msec) 00:18:23.128 slat (usec): min=2, max=7717, avg=138.77, stdev=724.57 00:18:23.128 clat (usec): min=1591, max=24723, avg=17787.17, stdev=3936.64 00:18:23.128 lat (usec): min=5696, max=24729, avg=17925.94, stdev=3900.71 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[10552], 5.00th=[12518], 10.00th=[13960], 20.00th=[14091], 00:18:23.128 | 30.00th=[14746], 40.00th=[15533], 50.00th=[17171], 60.00th=[19006], 00:18:23.128 | 70.00th=[20317], 80.00th=[22152], 90.00th=[23462], 95.00th=[23725], 00:18:23.128 | 99.00th=[24511], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:18:23.128 | 99.99th=[24773] 00:18:23.128 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:18:23.128 slat (usec): min=6, max=9206, avg=113.04, stdev=562.15 00:18:23.128 clat (usec): min=8347, max=29308, avg=14901.85, stdev=4312.02 00:18:23.128 lat (usec): min=10457, max=29370, avg=15014.88, stdev=4307.45 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[10159], 5.00th=[10814], 10.00th=[10945], 20.00th=[11338], 00:18:23.128 | 30.00th=[11863], 40.00th=[12387], 50.00th=[14484], 60.00th=[15270], 00:18:23.128 | 70.00th=[15926], 80.00th=[17171], 90.00th=[20055], 95.00th=[26084], 00:18:23.128 | 99.00th=[26870], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:18:23.128 | 99.99th=[29230] 00:18:23.128 bw ( KiB/s): min=15880, max=16384, per=27.27%, avg=16132.00, stdev=356.38, samples=2 00:18:23.128 iops : min= 3970, max= 4096, avg=4033.00, stdev=89.10, samples=2 00:18:23.128 lat (msec) : 2=0.01%, 10=0.90%, 20=78.48%, 50=20.61% 00:18:23.128 cpu : usr=3.29%, sys=11.27%, ctx=247, majf=0, minf=17 00:18:23.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:23.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.128 issued rwts: total=3649,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.128 job3: (groupid=0, jobs=1): err= 0: pid=67964: Wed May 15 13:56:21 2024 00:18:23.128 read: IOPS=1503, BW=6014KiB/s 
(6158kB/s)(6020KiB/1001msec) 00:18:23.128 slat (usec): min=5, max=7401, avg=324.22, stdev=1068.06 00:18:23.128 clat (usec): min=237, max=60179, avg=37893.74, stdev=9866.84 00:18:23.128 lat (usec): min=287, max=60216, avg=38217.97, stdev=9905.92 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[ 2343], 5.00th=[17957], 10.00th=[27132], 20.00th=[33424], 00:18:23.128 | 30.00th=[34866], 40.00th=[38011], 50.00th=[39584], 60.00th=[41157], 00:18:23.128 | 70.00th=[42730], 80.00th=[45351], 90.00th=[46924], 95.00th=[51119], 00:18:23.128 | 99.00th=[54789], 99.50th=[55313], 99.90th=[57934], 99.95th=[60031], 00:18:23.128 | 99.99th=[60031] 00:18:23.128 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:23.128 slat (usec): min=7, max=12040, avg=322.32, stdev=1085.80 00:18:23.128 clat (usec): min=17243, max=82746, avg=44630.07, stdev=22501.80 00:18:23.128 lat (usec): min=19247, max=85743, avg=44952.39, stdev=22664.40 00:18:23.128 clat percentiles (usec): 00:18:23.128 | 1.00th=[19268], 5.00th=[19792], 10.00th=[19792], 20.00th=[20055], 00:18:23.128 | 30.00th=[24511], 40.00th=[27395], 50.00th=[37487], 60.00th=[56886], 00:18:23.128 | 70.00th=[65799], 80.00th=[71828], 90.00th=[73925], 95.00th=[77071], 00:18:23.128 | 99.00th=[79168], 99.50th=[80217], 99.90th=[81265], 99.95th=[82314], 00:18:23.128 | 99.99th=[82314] 00:18:23.128 bw ( KiB/s): min= 8128, max= 8128, per=13.74%, avg=8128.00, stdev= 0.00, samples=1 00:18:23.128 iops : min= 2032, max= 2032, avg=2032.00, stdev= 0.00, samples=1 00:18:23.128 lat (usec) : 250=0.03%, 500=0.03% 00:18:23.128 lat (msec) : 2=0.20%, 4=0.85%, 10=0.39%, 20=9.14%, 50=63.50% 00:18:23.128 lat (msec) : 100=25.85% 00:18:23.128 cpu : usr=2.20%, sys=6.10%, ctx=454, majf=0, minf=7 00:18:23.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:23.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.129 issued rwts: total=1505,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.129 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.129 00:18:23.129 Run status group 0 (all jobs): 00:18:23.129 READ: bw=54.3MiB/s (57.0MB/s), 5307KiB/s-29.2MiB/s (5434kB/s-30.6MB/s), io=54.5MiB (57.2MB), run=1001-1004msec 00:18:23.129 WRITE: bw=57.8MiB/s (60.6MB/s), 6120KiB/s-29.9MiB/s (6266kB/s-31.4MB/s), io=58.0MiB (60.8MB), run=1001-1004msec 00:18:23.129 00:18:23.129 Disk stats (read/write): 00:18:23.129 nvme0n1: ios=6592/6656, merge=0/0, ticks=17089/14126, in_queue=31215, util=88.67% 00:18:23.129 nvme0n2: ios=1073/1387, merge=0/0, ticks=12450/22357, in_queue=34807, util=89.09% 00:18:23.129 nvme0n3: ios=3231/3584, merge=0/0, ticks=13387/11585, in_queue=24972, util=89.95% 00:18:23.129 nvme0n4: ios=1060/1536, merge=0/0, ticks=13698/20875, in_queue=34573, util=89.50% 00:18:23.129 13:56:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:23.129 13:56:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:23.129 13:56:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67983 00:18:23.129 13:56:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:23.129 [global] 00:18:23.129 thread=1 00:18:23.129 invalidate=1 00:18:23.129 rw=read 00:18:23.129 time_based=1 00:18:23.129 runtime=10 00:18:23.129 ioengine=libaio 00:18:23.129 direct=1 00:18:23.129 bs=4096 00:18:23.129 iodepth=1 00:18:23.129 
norandommap=1 00:18:23.129 numjobs=1 00:18:23.129 00:18:23.129 [job0] 00:18:23.129 filename=/dev/nvme0n1 00:18:23.129 [job1] 00:18:23.129 filename=/dev/nvme0n2 00:18:23.129 [job2] 00:18:23.129 filename=/dev/nvme0n3 00:18:23.129 [job3] 00:18:23.129 filename=/dev/nvme0n4 00:18:23.129 Could not set queue depth (nvme0n1) 00:18:23.129 Could not set queue depth (nvme0n2) 00:18:23.129 Could not set queue depth (nvme0n3) 00:18:23.129 Could not set queue depth (nvme0n4) 00:18:23.129 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.129 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.129 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.129 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:23.129 fio-3.35 00:18:23.129 Starting 4 threads 00:18:26.435 13:56:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:26.435 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=45031424, buflen=4096 00:18:26.435 fio: pid=68026, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:26.435 13:56:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:26.435 fio: pid=68025, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:26.435 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=80486400, buflen=4096 00:18:26.435 13:56:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:26.435 13:56:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:26.435 fio: pid=68023, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:26.435 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=53403648, buflen=4096 00:18:26.778 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:26.778 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:26.778 fio: pid=68024, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:26.778 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=21929984, buflen=4096 00:18:26.778 00:18:26.778 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68023: Wed May 15 13:56:25 2024 00:18:26.778 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(50.9MiB/3190msec) 00:18:26.778 slat (usec): min=6, max=16101, avg=12.44, stdev=199.08 00:18:26.778 clat (usec): min=98, max=3798, avg=231.52, stdev=66.00 00:18:26.778 lat (usec): min=107, max=16406, avg=243.96, stdev=209.99 00:18:26.778 clat percentiles (usec): 00:18:26.778 | 1.00th=[ 125], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 215], 00:18:26.778 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:18:26.778 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:18:26.779 | 99.00th=[ 318], 99.50th=[ 359], 99.90th=[ 693], 99.95th=[ 1156], 00:18:26.779 | 99.99th=[ 3064] 00:18:26.779 bw ( KiB/s): min=15656, max=16861, per=21.19%, avg=16199.17, stdev=463.44, samples=6 
00:18:26.779 iops : min= 3914, max= 4215, avg=4049.67, stdev=115.71, samples=6 00:18:26.779 lat (usec) : 100=0.04%, 250=75.82%, 500=23.97%, 750=0.08%, 1000=0.04% 00:18:26.779 lat (msec) : 2=0.02%, 4=0.03% 00:18:26.779 cpu : usr=0.75%, sys=3.42%, ctx=13046, majf=0, minf=1 00:18:26.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 issued rwts: total=13039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.779 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68024: Wed May 15 13:56:25 2024 00:18:26.779 read: IOPS=6350, BW=24.8MiB/s (26.0MB/s)(84.9MiB/3423msec) 00:18:26.779 slat (usec): min=5, max=10846, avg=10.92, stdev=151.16 00:18:26.779 clat (usec): min=93, max=7594, avg=145.87, stdev=77.01 00:18:26.779 lat (usec): min=101, max=11008, avg=156.78, stdev=170.31 00:18:26.779 clat percentiles (usec): 00:18:26.779 | 1.00th=[ 105], 5.00th=[ 119], 10.00th=[ 125], 20.00th=[ 130], 00:18:26.779 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:18:26.779 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 167], 95.00th=[ 188], 00:18:26.779 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 594], 99.95th=[ 1156], 00:18:26.779 | 99.99th=[ 3654] 00:18:26.779 bw ( KiB/s): min=20038, max=27105, per=33.40%, avg=25533.17, stdev=2754.98, samples=6 00:18:26.779 iops : min= 5009, max= 6776, avg=6383.17, stdev=688.92, samples=6 00:18:26.779 lat (usec) : 100=0.24%, 250=98.46%, 500=1.17%, 750=0.05%, 1000=0.02% 00:18:26.779 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:18:26.779 cpu : usr=1.23%, sys=4.70%, ctx=21752, majf=0, minf=1 00:18:26.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 issued rwts: total=21739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.779 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68025: Wed May 15 13:56:25 2024 00:18:26.779 read: IOPS=6545, BW=25.6MiB/s (26.8MB/s)(76.8MiB/3002msec) 00:18:26.779 slat (usec): min=6, max=8859, avg= 8.75, stdev=83.44 00:18:26.779 clat (usec): min=103, max=1875, avg=143.41, stdev=26.21 00:18:26.779 lat (usec): min=112, max=9016, avg=152.17, stdev=87.64 00:18:26.779 clat percentiles (usec): 00:18:26.779 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:18:26.779 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:18:26.779 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 167], 00:18:26.779 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 277], 99.95th=[ 433], 00:18:26.779 | 99.99th=[ 1598] 00:18:26.779 bw ( KiB/s): min=25800, max=26802, per=34.44%, avg=26325.20, stdev=418.74, samples=5 00:18:26.779 iops : min= 6450, max= 6700, avg=6581.20, stdev=104.54, samples=5 00:18:26.779 lat (usec) : 250=99.87%, 500=0.09%, 750=0.01%, 1000=0.01% 00:18:26.779 lat (msec) : 2=0.02% 00:18:26.779 cpu : usr=1.00%, sys=5.00%, ctx=19657, majf=0, minf=1 00:18:26.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 issued rwts: total=19651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.779 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68026: Wed May 15 13:56:25 2024 00:18:26.779 read: IOPS=3916, BW=15.3MiB/s (16.0MB/s)(42.9MiB/2807msec) 00:18:26.779 slat (usec): min=6, max=137, avg= 9.06, stdev= 3.49 00:18:26.779 clat (usec): min=121, max=6616, avg=245.28, stdev=117.36 00:18:26.779 lat (usec): min=129, max=6624, avg=254.33, stdev=117.60 00:18:26.779 clat percentiles (usec): 00:18:26.779 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:18:26.779 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:18:26.779 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 277], 00:18:26.779 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 1418], 99.95th=[ 3261], 00:18:26.779 | 99.99th=[ 6390] 00:18:26.779 bw ( KiB/s): min=15296, max=16176, per=20.56%, avg=15716.80, stdev=316.26, samples=5 00:18:26.779 iops : min= 3824, max= 4044, avg=3929.20, stdev=79.06, samples=5 00:18:26.779 lat (usec) : 250=71.97%, 500=27.80%, 750=0.07%, 1000=0.05% 00:18:26.779 lat (msec) : 2=0.02%, 4=0.05%, 10=0.03% 00:18:26.779 cpu : usr=0.78%, sys=3.28%, ctx=11001, majf=0, minf=2 00:18:26.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.779 issued rwts: total=10995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.779 00:18:26.779 Run status group 0 (all jobs): 00:18:26.779 READ: bw=74.7MiB/s (78.3MB/s), 15.3MiB/s-25.6MiB/s (16.0MB/s-26.8MB/s), io=256MiB (268MB), run=2807-3423msec 00:18:26.779 00:18:26.779 Disk stats (read/write): 00:18:26.779 nvme0n1: ios=12668/0, merge=0/0, ticks=2929/0, in_queue=2929, util=94.92% 00:18:26.779 nvme0n2: ios=21384/0, merge=0/0, ticks=3121/0, in_queue=3121, util=95.07% 00:18:26.779 nvme0n3: ios=18798/0, merge=0/0, ticks=2740/0, in_queue=2740, util=96.17% 00:18:26.779 nvme0n4: ios=10281/0, merge=0/0, ticks=2506/0, in_queue=2506, util=96.14% 00:18:26.779 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:26.779 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:27.037 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.037 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:27.297 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.297 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:27.555 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.555 13:56:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc5 00:18:27.555 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.555 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 67983 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:27.814 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.072 nvmf hotplug test: fio failed as expected 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.072 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.072 rmmod nvme_tcp 00:18:28.331 rmmod nvme_fabrics 00:18:28.331 rmmod nvme_keyring 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67600 ']' 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67600 00:18:28.331 13:56:26 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 67600 ']' 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 67600 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:28.331 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.332 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67600 00:18:28.332 killing process with pid 67600 00:18:28.332 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:28.332 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:28.332 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67600' 00:18:28.332 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 67600 00:18:28.332 [2024-05-15 13:56:26.719591] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:28.332 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 67600 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.591 13:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:28.591 ************************************ 00:18:28.591 END TEST nvmf_fio_target 00:18:28.591 ************************************ 00:18:28.591 00:18:28.591 real 0m18.602s 00:18:28.591 user 1m8.160s 00:18:28.591 sys 0m10.651s 00:18:28.591 13:56:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:28.591 13:56:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.591 13:56:27 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:28.591 13:56:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:28.591 13:56:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:28.591 13:56:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.591 ************************************ 00:18:28.591 START TEST nvmf_bdevio 00:18:28.591 ************************************ 00:18:28.591 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:28.851 * Looking for test storage... 
00:18:28.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.851 13:56:27 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.851 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:28.852 Cannot find device "nvmf_tgt_br" 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.852 Cannot find device "nvmf_tgt_br2" 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:28.852 Cannot find device "nvmf_tgt_br" 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:28.852 Cannot find device "nvmf_tgt_br2" 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:18:28.852 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:29.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:29.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:29.112 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:29.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:29.112 00:18:29.112 --- 10.0.0.2 ping statistics --- 00:18:29.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.113 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:29.113 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:29.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:29.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:18:29.113 00:18:29.113 --- 10.0.0.3 ping statistics --- 00:18:29.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.113 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:18:29.113 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:29.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:29.371 00:18:29.371 --- 10.0.0.1 ping statistics --- 00:18:29.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.371 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68291 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68291 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 68291 ']' 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.371 13:56:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.371 [2024-05-15 13:56:27.775857] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:18:29.371 [2024-05-15 13:56:27.776144] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.371 [2024-05-15 13:56:27.920316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:29.630 [2024-05-15 13:56:28.009173] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.630 [2024-05-15 13:56:28.009219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:29.630 [2024-05-15 13:56:28.009229] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.630 [2024-05-15 13:56:28.009237] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.630 [2024-05-15 13:56:28.009244] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.630 [2024-05-15 13:56:28.009441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:29.630 [2024-05-15 13:56:28.010303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:29.630 [2024-05-15 13:56:28.010418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.630 [2024-05-15 13:56:28.010420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.204 [2024-05-15 13:56:28.683220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.204 Malloc0 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.204 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:18:30.463 [2024-05-15 13:56:28.763179] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:30.463 [2024-05-15 13:56:28.763556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.463 { 00:18:30.463 "params": { 00:18:30.463 "name": "Nvme$subsystem", 00:18:30.463 "trtype": "$TEST_TRANSPORT", 00:18:30.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.463 "adrfam": "ipv4", 00:18:30.463 "trsvcid": "$NVMF_PORT", 00:18:30.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.463 "hdgst": ${hdgst:-false}, 00:18:30.463 "ddgst": ${ddgst:-false} 00:18:30.463 }, 00:18:30.463 "method": "bdev_nvme_attach_controller" 00:18:30.463 } 00:18:30.463 EOF 00:18:30.463 )") 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:30.463 13:56:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:30.463 "params": { 00:18:30.463 "name": "Nvme1", 00:18:30.463 "trtype": "tcp", 00:18:30.463 "traddr": "10.0.0.2", 00:18:30.463 "adrfam": "ipv4", 00:18:30.463 "trsvcid": "4420", 00:18:30.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.463 "hdgst": false, 00:18:30.463 "ddgst": false 00:18:30.463 }, 00:18:30.463 "method": "bdev_nvme_attach_controller" 00:18:30.463 }' 00:18:30.463 [2024-05-15 13:56:28.818230] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:18:30.463 [2024-05-15 13:56:28.818432] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68327 ] 00:18:30.463 [2024-05-15 13:56:28.960110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:30.722 [2024-05-15 13:56:29.061103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.722 [2024-05-15 13:56:29.061290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.722 [2024-05-15 13:56:29.061291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.722 I/O targets: 00:18:30.722 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:30.722 00:18:30.722 00:18:30.722 CUnit - A unit testing framework for C - Version 2.1-3 00:18:30.722 http://cunit.sourceforge.net/ 00:18:30.722 00:18:30.722 00:18:30.722 Suite: bdevio tests on: Nvme1n1 00:18:30.722 Test: blockdev write read block ...passed 00:18:30.722 Test: blockdev write zeroes read block ...passed 00:18:30.722 Test: blockdev write zeroes read no split ...passed 00:18:30.722 Test: blockdev write zeroes read split ...passed 00:18:30.722 Test: blockdev write zeroes read split partial ...passed 00:18:30.722 Test: blockdev reset ...[2024-05-15 13:56:29.249660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.722 [2024-05-15 13:56:29.249920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf51520 (9): Bad file descriptor 00:18:30.722 [2024-05-15 13:56:29.260884] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:30.722 passed 00:18:30.722 Test: blockdev write read 8 blocks ...passed 00:18:30.722 Test: blockdev write read size > 128k ...passed 00:18:30.722 Test: blockdev write read invalid size ...passed 00:18:30.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:30.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:30.722 Test: blockdev write read max offset ...passed 00:18:30.722 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:30.722 Test: blockdev writev readv 8 blocks ...passed 00:18:30.722 Test: blockdev writev readv 30 x 1block ...passed 00:18:30.722 Test: blockdev writev readv block ...passed 00:18:30.722 Test: blockdev writev readv size > 128k ...passed 00:18:30.722 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:30.722 Test: blockdev comparev and writev ...[2024-05-15 13:56:29.269525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.722 [2024-05-15 13:56:29.269820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.722 [2024-05-15 13:56:29.269895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.722 [2024-05-15 13:56:29.269958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.270301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.723 [2024-05-15 13:56:29.270501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.270691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.723 [2024-05-15 13:56:29.270876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.271167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.723 [2024-05-15 13:56:29.271307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.271421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.723 [2024-05-15 13:56:29.271567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.271957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.723 [2024-05-15 13:56:29.272120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.272330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.723 [2024-05-15 13:56:29.272494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 passed 00:18:30.723 Test: blockdev nvme passthru rw ...cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.723 passed 00:18:30.723 Test: blockdev nvme passthru vendor specific ...[2024-05-15 13:56:29.273606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.723 [2024-05-15 13:56:29.273693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.273859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.723 [2024-05-15 13:56:29.273918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.274041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.723 [2024-05-15 13:56:29.274088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.723 [2024-05-15 13:56:29.274208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.723 [2024-05-15 13:56:29.274261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.723 passed 00:18:31.045 Test: blockdev nvme admin passthru ...passed 00:18:31.045 Test: blockdev copy ...passed 00:18:31.045 00:18:31.045 Run Summary: Type Total Ran Passed Failed Inactive 00:18:31.045 suites 1 1 n/a 0 0 00:18:31.045 tests 23 23 23 0 0 00:18:31.045 asserts 
152 152 152 0 n/a 00:18:31.045 00:18:31.045 Elapsed time = 0.141 seconds 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:31.045 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.305 rmmod nvme_tcp 00:18:31.305 rmmod nvme_fabrics 00:18:31.305 rmmod nvme_keyring 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68291 ']' 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68291 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 68291 ']' 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 68291 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68291 00:18:31.305 killing process with pid 68291 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68291' 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 68291 00:18:31.305 [2024-05-15 13:56:29.658671] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:31.305 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 68291 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
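Condensed, the teardown traced around this point (and continued just below) amounts to the sequence sketched here. The PID and NQN are the ones from this run; rpc_cmd is written out as a direct scripts/rpc.py call against the target's default socket, which is an assumption about how the helper resolves, and the body of _remove_spdk_ns is not visible in this trace (its xtrace goes to fd 14, redirected to /dev/null):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the bdevio test subsystem
    sync
    modprobe -v -r nvme-tcp            # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 68291 && wait 68291           # stop the nvmf_tgt started for this test (killprocess)
    _remove_spdk_ns                    # namespace cleanup helper; output suppressed via "14> /dev/null"
    ip -4 addr flush nvmf_init_if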
00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:31.564 00:18:31.564 real 0m2.849s 00:18:31.564 user 0m8.630s 00:18:31.564 sys 0m0.876s 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:31.564 ************************************ 00:18:31.564 END TEST nvmf_bdevio 00:18:31.564 ************************************ 00:18:31.564 13:56:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.564 13:56:30 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.564 13:56:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:31.564 13:56:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:31.564 13:56:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.564 ************************************ 00:18:31.564 START TEST nvmf_auth_target 00:18:31.564 ************************************ 00:18:31.564 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.824 * Looking for test storage... 00:18:31.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.824 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:31.825 Cannot find device "nvmf_tgt_br" 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.825 Cannot find device "nvmf_tgt_br2" 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:31.825 Cannot find device "nvmf_tgt_br" 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:31.825 Cannot find device "nvmf_tgt_br2" 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.825 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:32.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:18:32.085 00:18:32.085 --- 10.0.0.2 ping statistics --- 00:18:32.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.085 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:32.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:32.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:18:32.085 00:18:32.085 --- 10.0.0.3 ping statistics --- 00:18:32.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.085 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:32.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:18:32.085 00:18:32.085 --- 10.0.0.1 ping statistics --- 00:18:32.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.085 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:32.085 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68506 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68506 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 68506 ']' 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
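The interface plumbing traced above reduces to two-plus veth pairs bridged on the host, with the target ends moved into nvmf_tgt_ns_spdk; the ping checks then verify 10.0.0.2/10.0.0.3 from the host and 10.0.0.1 from inside the namespace. Every command below appears verbatim in the trace (the link-up steps are omitted); this is only the topology gathered in one place:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator side, stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br            # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2           # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                             # host ends of all pairs join the bridge
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP toward the initiator interface
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT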
00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.390 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=68529 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2d382ba428fd249626cdb8f1aaca310bd5fd10ab308ff08d 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wuc 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2d382ba428fd249626cdb8f1aaca310bd5fd10ab308ff08d 0 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2d382ba428fd249626cdb8f1aaca310bd5fd10ab308ff08d 0 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2d382ba428fd249626cdb8f1aaca310bd5fd10ab308ff08d 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wuc 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wuc 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.wuc 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.328 
13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9fba467d96039c6299f7ee49587b8619 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Aex 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9fba467d96039c6299f7ee49587b8619 1 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9fba467d96039c6299f7ee49587b8619 1 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9fba467d96039c6299f7ee49587b8619 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Aex 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Aex 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.Aex 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.328 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a27f0d4bf71648d32a69fe68e2d208c98fb815fd789d969e 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FSD 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a27f0d4bf71648d32a69fe68e2d208c98fb815fd789d969e 2 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a27f0d4bf71648d32a69fe68e2d208c98fb815fd789d969e 2 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a27f0d4bf71648d32a69fe68e2d208c98fb815fd789d969e 00:18:33.329 13:56:31 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FSD 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FSD 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.FSD 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=664af3b3407be19f46e93cb97070ba0727180829b111ce02c014829104ef537f 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kL4 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 664af3b3407be19f46e93cb97070ba0727180829b111ce02c014829104ef537f 3 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 664af3b3407be19f46e93cb97070ba0727180829b111ce02c014829104ef537f 3 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=664af3b3407be19f46e93cb97070ba0727180829b111ce02c014829104ef537f 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kL4 00:18:33.329 13:56:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kL4 00:18:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.kL4 00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 68506 00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 68506 ']' 00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
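The four keys generated above (key0..key3: null/sha256/sha384/sha512) are then cycled through every digest and dhgroup combination. The per-key flow, condensed from the trace that follows for key0, looks like this; the host NQN, addresses and key path are the ones used throughout this run, rpc_cmd is assumed to resolve to scripts/rpc.py against the target's default socket, and hostrpc points at /var/tmp/host.sock:

    # target side: register the key file and authorize the host to use it for cnode0
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.wuc
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0
    # host side: same key file, restrict digests/dhgroups, then attach and authenticate
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wuc
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0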
00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.589 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 68529 /var/tmp/host.sock 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 68529 ']' 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.589 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wuc 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wuc 00:18:33.849 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wuc 00:18:34.108 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:34.108 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Aex 00:18:34.108 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.108 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.108 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.108 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Aex 00:18:34.108 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 
/tmp/spdk.key-sha256.Aex 00:18:34.367 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:34.367 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FSD 00:18:34.367 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FSD 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FSD 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kL4 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kL4 00:18:34.368 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kL4 00:18:34.626 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:34.626 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.626 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:34.626 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.626 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:34.887 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:35.145 00:18:35.145 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:35.145 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.145 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:35.404 { 00:18:35.404 "cntlid": 1, 00:18:35.404 "qid": 0, 00:18:35.404 "state": "enabled", 00:18:35.404 "listen_address": { 00:18:35.404 "trtype": "TCP", 00:18:35.404 "adrfam": "IPv4", 00:18:35.404 "traddr": "10.0.0.2", 00:18:35.404 "trsvcid": "4420" 00:18:35.404 }, 00:18:35.404 "peer_address": { 00:18:35.404 "trtype": "TCP", 00:18:35.404 "adrfam": "IPv4", 00:18:35.404 "traddr": "10.0.0.1", 00:18:35.404 "trsvcid": "43110" 00:18:35.404 }, 00:18:35.404 "auth": { 00:18:35.404 "state": "completed", 00:18:35.404 "digest": "sha256", 00:18:35.404 "dhgroup": "null" 00:18:35.404 } 00:18:35.404 } 00:18:35.404 ]' 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.404 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.663 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:39.852 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:39.852 00:18:39.852 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:39.852 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:39.853 { 00:18:39.853 "cntlid": 3, 00:18:39.853 "qid": 0, 
00:18:39.853 "state": "enabled", 00:18:39.853 "listen_address": { 00:18:39.853 "trtype": "TCP", 00:18:39.853 "adrfam": "IPv4", 00:18:39.853 "traddr": "10.0.0.2", 00:18:39.853 "trsvcid": "4420" 00:18:39.853 }, 00:18:39.853 "peer_address": { 00:18:39.853 "trtype": "TCP", 00:18:39.853 "adrfam": "IPv4", 00:18:39.853 "traddr": "10.0.0.1", 00:18:39.853 "trsvcid": "38576" 00:18:39.853 }, 00:18:39.853 "auth": { 00:18:39.853 "state": "completed", 00:18:39.853 "digest": "sha256", 00:18:39.853 "dhgroup": "null" 00:18:39.853 } 00:18:39.853 } 00:18:39.853 ]' 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.853 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:40.112 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:40.112 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:40.112 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.112 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.112 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.371 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:41.197 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:41.457 00:18:41.457 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:41.457 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:41.457 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.457 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.457 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.457 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.457 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:41.716 { 00:18:41.716 "cntlid": 5, 00:18:41.716 "qid": 0, 00:18:41.716 "state": "enabled", 00:18:41.716 "listen_address": { 00:18:41.716 "trtype": "TCP", 00:18:41.716 "adrfam": "IPv4", 00:18:41.716 "traddr": "10.0.0.2", 00:18:41.716 "trsvcid": "4420" 00:18:41.716 }, 00:18:41.716 "peer_address": { 00:18:41.716 "trtype": "TCP", 00:18:41.716 "adrfam": "IPv4", 00:18:41.716 "traddr": "10.0.0.1", 00:18:41.716 "trsvcid": "38600" 00:18:41.716 }, 00:18:41.716 "auth": { 00:18:41.716 "state": "completed", 00:18:41.716 "digest": "sha256", 00:18:41.716 "dhgroup": "null" 00:18:41.716 } 00:18:41.716 } 00:18:41.716 ]' 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.716 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.993 13:56:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.569 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.829 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.089 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.089 13:56:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.089 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:43.089 { 00:18:43.089 "cntlid": 7, 00:18:43.089 "qid": 0, 00:18:43.089 "state": "enabled", 00:18:43.089 "listen_address": { 00:18:43.089 "trtype": "TCP", 00:18:43.089 "adrfam": "IPv4", 00:18:43.089 "traddr": "10.0.0.2", 00:18:43.089 "trsvcid": "4420" 00:18:43.089 }, 00:18:43.089 "peer_address": { 00:18:43.089 "trtype": "TCP", 00:18:43.089 "adrfam": "IPv4", 00:18:43.089 "traddr": "10.0.0.1", 00:18:43.089 "trsvcid": "38632" 00:18:43.089 }, 00:18:43.089 "auth": { 00:18:43.089 "state": "completed", 00:18:43.089 "digest": "sha256", 00:18:43.089 "dhgroup": "null" 00:18:43.089 } 00:18:43.089 } 00:18:43.089 ]' 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.348 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.607 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.174 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.433 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.692 00:18:44.692 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:44.692 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:44.692 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.692 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.692 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.692 13:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.692 13:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.951 { 00:18:44.951 "cntlid": 9, 00:18:44.951 "qid": 0, 00:18:44.951 "state": "enabled", 00:18:44.951 "listen_address": { 00:18:44.951 "trtype": "TCP", 00:18:44.951 "adrfam": "IPv4", 00:18:44.951 "traddr": "10.0.0.2", 00:18:44.951 "trsvcid": "4420" 00:18:44.951 }, 00:18:44.951 "peer_address": { 00:18:44.951 "trtype": "TCP", 00:18:44.951 "adrfam": "IPv4", 00:18:44.951 "traddr": "10.0.0.1", 00:18:44.951 "trsvcid": "38650" 00:18:44.951 }, 00:18:44.951 "auth": { 00:18:44.951 "state": "completed", 00:18:44.951 "digest": "sha256", 00:18:44.951 "dhgroup": "ffdhe2048" 00:18:44.951 } 00:18:44.951 } 00:18:44.951 ]' 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.951 13:56:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.951 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.257 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:18:45.824 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:45.825 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:46.083 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:46.342 { 00:18:46.342 "cntlid": 11, 00:18:46.342 "qid": 0, 00:18:46.342 "state": "enabled", 00:18:46.342 "listen_address": { 00:18:46.342 "trtype": "TCP", 00:18:46.342 "adrfam": "IPv4", 00:18:46.342 "traddr": "10.0.0.2", 00:18:46.342 "trsvcid": "4420" 00:18:46.342 }, 00:18:46.342 "peer_address": { 00:18:46.342 "trtype": "TCP", 00:18:46.342 "adrfam": "IPv4", 00:18:46.342 "traddr": "10.0.0.1", 00:18:46.342 "trsvcid": "38678" 00:18:46.342 }, 00:18:46.342 "auth": { 00:18:46.342 "state": "completed", 00:18:46.342 "digest": "sha256", 00:18:46.342 "dhgroup": "ffdhe2048" 00:18:46.342 } 00:18:46.342 } 00:18:46.342 ]' 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.342 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:46.601 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.601 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:46.601 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.601 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.601 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.859 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:47.424 13:56:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:47.424 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:47.681 00:18:47.681 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:47.682 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:47.682 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.939 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.939 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.939 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.939 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.939 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.939 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:47.939 { 00:18:47.939 "cntlid": 13, 00:18:47.939 "qid": 0, 00:18:47.939 "state": "enabled", 00:18:47.939 "listen_address": { 00:18:47.939 "trtype": "TCP", 00:18:47.939 "adrfam": "IPv4", 00:18:47.939 "traddr": 
"10.0.0.2", 00:18:47.939 "trsvcid": "4420" 00:18:47.939 }, 00:18:47.939 "peer_address": { 00:18:47.939 "trtype": "TCP", 00:18:47.939 "adrfam": "IPv4", 00:18:47.939 "traddr": "10.0.0.1", 00:18:47.939 "trsvcid": "53896" 00:18:47.939 }, 00:18:47.939 "auth": { 00:18:47.939 "state": "completed", 00:18:47.939 "digest": "sha256", 00:18:47.939 "dhgroup": "ffdhe2048" 00:18:47.939 } 00:18:47.939 } 00:18:47.939 ]' 00:18:47.939 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:48.204 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.204 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:48.204 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.204 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:48.204 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.204 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.204 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.475 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 
00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.041 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.299 00:18:49.299 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:49.299 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:49.299 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.556 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.556 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.556 { 00:18:49.556 "cntlid": 15, 00:18:49.556 "qid": 0, 00:18:49.556 "state": "enabled", 00:18:49.556 "listen_address": { 00:18:49.556 "trtype": "TCP", 00:18:49.556 "adrfam": "IPv4", 00:18:49.556 "traddr": "10.0.0.2", 00:18:49.556 "trsvcid": "4420" 00:18:49.556 }, 00:18:49.556 "peer_address": { 00:18:49.556 "trtype": "TCP", 00:18:49.556 "adrfam": "IPv4", 00:18:49.556 "traddr": "10.0.0.1", 00:18:49.556 "trsvcid": "53926" 00:18:49.556 }, 00:18:49.556 "auth": { 00:18:49.556 "state": "completed", 00:18:49.556 "digest": "sha256", 00:18:49.556 "dhgroup": "ffdhe2048" 00:18:49.556 } 00:18:49.556 } 00:18:49.556 ]' 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.556 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.814 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.814 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.814 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.814 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.814 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:18:50.379 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:50.636 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:50.636 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:50.895 00:18:50.895 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:50.895 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.895 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.153 13:56:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.153 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.153 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.153 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.153 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.153 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:51.153 { 00:18:51.153 "cntlid": 17, 00:18:51.153 "qid": 0, 00:18:51.153 "state": "enabled", 00:18:51.153 "listen_address": { 00:18:51.153 "trtype": "TCP", 00:18:51.153 "adrfam": "IPv4", 00:18:51.153 "traddr": "10.0.0.2", 00:18:51.153 "trsvcid": "4420" 00:18:51.153 }, 00:18:51.153 "peer_address": { 00:18:51.153 "trtype": "TCP", 00:18:51.153 "adrfam": "IPv4", 00:18:51.153 "traddr": "10.0.0.1", 00:18:51.153 "trsvcid": "53948" 00:18:51.153 }, 00:18:51.153 "auth": { 00:18:51.153 "state": "completed", 00:18:51.153 "digest": "sha256", 00:18:51.153 "dhgroup": "ffdhe3072" 00:18:51.153 } 00:18:51.153 } 00:18:51.153 ]' 00:18:51.153 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:51.412 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.412 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:51.412 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:51.412 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:51.412 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.412 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.412 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.671 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.240 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:52.499 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:52.758 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:52.758 { 00:18:52.758 "cntlid": 19, 00:18:52.758 "qid": 0, 00:18:52.758 "state": "enabled", 00:18:52.758 "listen_address": { 00:18:52.758 "trtype": "TCP", 00:18:52.758 "adrfam": "IPv4", 00:18:52.758 "traddr": "10.0.0.2", 00:18:52.758 "trsvcid": "4420" 00:18:52.758 }, 00:18:52.758 "peer_address": { 00:18:52.758 "trtype": "TCP", 00:18:52.758 "adrfam": "IPv4", 00:18:52.758 "traddr": "10.0.0.1", 00:18:52.758 "trsvcid": "53966" 00:18:52.758 }, 00:18:52.758 "auth": { 00:18:52.758 "state": "completed", 00:18:52.758 "digest": "sha256", 00:18:52.758 "dhgroup": "ffdhe3072" 00:18:52.758 } 00:18:52.758 } 00:18:52.758 ]' 00:18:52.758 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:53.017 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.017 13:56:51 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:53.017 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.017 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:53.017 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.017 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.017 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.275 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.842 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:54.103 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:54.362 00:18:54.362 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:54.362 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:54.362 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.362 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.362 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.362 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.362 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.621 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.621 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:54.621 { 00:18:54.621 "cntlid": 21, 00:18:54.621 "qid": 0, 00:18:54.621 "state": "enabled", 00:18:54.621 "listen_address": { 00:18:54.621 "trtype": "TCP", 00:18:54.621 "adrfam": "IPv4", 00:18:54.621 "traddr": "10.0.0.2", 00:18:54.621 "trsvcid": "4420" 00:18:54.621 }, 00:18:54.621 "peer_address": { 00:18:54.621 "trtype": "TCP", 00:18:54.621 "adrfam": "IPv4", 00:18:54.621 "traddr": "10.0.0.1", 00:18:54.621 "trsvcid": "53980" 00:18:54.621 }, 00:18:54.621 "auth": { 00:18:54.621 "state": "completed", 00:18:54.621 "digest": "sha256", 00:18:54.621 "dhgroup": "ffdhe3072" 00:18:54.621 } 00:18:54.621 } 00:18:54.621 ]' 00:18:54.621 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:54.621 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.621 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:54.621 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.621 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.621 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.621 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.621 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.879 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.445 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.704 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.962 00:18:55.962 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.962 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.962 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:56.221 { 00:18:56.221 "cntlid": 23, 00:18:56.221 "qid": 0, 00:18:56.221 "state": "enabled", 00:18:56.221 "listen_address": { 00:18:56.221 "trtype": "TCP", 00:18:56.221 "adrfam": "IPv4", 00:18:56.221 "traddr": "10.0.0.2", 00:18:56.221 "trsvcid": 
"4420" 00:18:56.221 }, 00:18:56.221 "peer_address": { 00:18:56.221 "trtype": "TCP", 00:18:56.221 "adrfam": "IPv4", 00:18:56.221 "traddr": "10.0.0.1", 00:18:56.221 "trsvcid": "54012" 00:18:56.221 }, 00:18:56.221 "auth": { 00:18:56.221 "state": "completed", 00:18:56.221 "digest": "sha256", 00:18:56.221 "dhgroup": "ffdhe3072" 00:18:56.221 } 00:18:56.221 } 00:18:56.221 ]' 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.221 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.479 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.070 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:57.329 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:57.587 00:18:57.587 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:57.587 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.587 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:57.846 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.846 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.846 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.846 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.846 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.846 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:57.846 { 00:18:57.846 "cntlid": 25, 00:18:57.846 "qid": 0, 00:18:57.846 "state": "enabled", 00:18:57.846 "listen_address": { 00:18:57.846 "trtype": "TCP", 00:18:57.846 "adrfam": "IPv4", 00:18:57.846 "traddr": "10.0.0.2", 00:18:57.846 "trsvcid": "4420" 00:18:57.846 }, 00:18:57.846 "peer_address": { 00:18:57.846 "trtype": "TCP", 00:18:57.846 "adrfam": "IPv4", 00:18:57.846 "traddr": "10.0.0.1", 00:18:57.846 "trsvcid": "58358" 00:18:57.846 }, 00:18:57.846 "auth": { 00:18:57.846 "state": "completed", 00:18:57.846 "digest": "sha256", 00:18:57.846 "dhgroup": "ffdhe4096" 00:18:57.846 } 00:18:57.846 } 00:18:57.846 ]' 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.847 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.106 13:56:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.674 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:58.935 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:59.235 00:18:59.235 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:59.235 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:59.235 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:59.497 { 00:18:59.497 "cntlid": 27, 00:18:59.497 "qid": 0, 00:18:59.497 "state": "enabled", 00:18:59.497 "listen_address": { 00:18:59.497 "trtype": "TCP", 00:18:59.497 "adrfam": "IPv4", 00:18:59.497 "traddr": "10.0.0.2", 00:18:59.497 "trsvcid": "4420" 00:18:59.497 }, 00:18:59.497 "peer_address": { 00:18:59.497 "trtype": "TCP", 00:18:59.497 "adrfam": "IPv4", 00:18:59.497 "traddr": "10.0.0.1", 00:18:59.497 "trsvcid": "58382" 00:18:59.497 }, 00:18:59.497 "auth": { 00:18:59.497 "state": "completed", 00:18:59.497 "digest": "sha256", 00:18:59.497 "dhgroup": "ffdhe4096" 00:18:59.497 } 00:18:59.497 } 00:18:59.497 ]' 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.497 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.759 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.327 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.586 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.586 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.586 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:00.586 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:00.845 00:19:00.845 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:00.845 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.845 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:01.105 { 00:19:01.105 "cntlid": 29, 00:19:01.105 "qid": 0, 00:19:01.105 "state": "enabled", 00:19:01.105 "listen_address": { 00:19:01.105 "trtype": "TCP", 00:19:01.105 "adrfam": "IPv4", 00:19:01.105 "traddr": "10.0.0.2", 00:19:01.105 "trsvcid": "4420" 00:19:01.105 }, 00:19:01.105 "peer_address": { 00:19:01.105 "trtype": "TCP", 00:19:01.105 "adrfam": "IPv4", 00:19:01.105 "traddr": "10.0.0.1", 00:19:01.105 "trsvcid": "58404" 00:19:01.105 }, 00:19:01.105 "auth": { 00:19:01.105 "state": "completed", 00:19:01.105 "digest": "sha256", 00:19:01.105 "dhgroup": "ffdhe4096" 00:19:01.105 } 00:19:01.105 } 00:19:01.105 ]' 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:01.105 
13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.105 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.365 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.930 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.188 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.447 00:19:02.706 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:02.706 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:02.707 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.707 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.707 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.707 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.707 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.966 { 00:19:02.966 "cntlid": 31, 00:19:02.966 "qid": 0, 00:19:02.966 "state": "enabled", 00:19:02.966 "listen_address": { 00:19:02.966 "trtype": "TCP", 00:19:02.966 "adrfam": "IPv4", 00:19:02.966 "traddr": "10.0.0.2", 00:19:02.966 "trsvcid": "4420" 00:19:02.966 }, 00:19:02.966 "peer_address": { 00:19:02.966 "trtype": "TCP", 00:19:02.966 "adrfam": "IPv4", 00:19:02.966 "traddr": "10.0.0.1", 00:19:02.966 "trsvcid": "58432" 00:19:02.966 }, 00:19:02.966 "auth": { 00:19:02.966 "state": "completed", 00:19:02.966 "digest": "sha256", 00:19:02.966 "dhgroup": "ffdhe4096" 00:19:02.966 } 00:19:02.966 } 00:19:02.966 ]' 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.966 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.288 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:03.859 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:04.118 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:04.377 00:19:04.377 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:04.377 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:04.377 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:04.637 { 00:19:04.637 "cntlid": 33, 00:19:04.637 "qid": 0, 00:19:04.637 "state": "enabled", 00:19:04.637 "listen_address": { 00:19:04.637 
"trtype": "TCP", 00:19:04.637 "adrfam": "IPv4", 00:19:04.637 "traddr": "10.0.0.2", 00:19:04.637 "trsvcid": "4420" 00:19:04.637 }, 00:19:04.637 "peer_address": { 00:19:04.637 "trtype": "TCP", 00:19:04.637 "adrfam": "IPv4", 00:19:04.637 "traddr": "10.0.0.1", 00:19:04.637 "trsvcid": "58442" 00:19:04.637 }, 00:19:04.637 "auth": { 00:19:04.637 "state": "completed", 00:19:04.637 "digest": "sha256", 00:19:04.637 "dhgroup": "ffdhe6144" 00:19:04.637 } 00:19:04.637 } 00:19:04.637 ]' 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.637 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.896 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:05.462 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.462 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:05.462 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.462 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.463 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.463 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:05.463 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:05.463 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:05.721 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:06.295 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:06.295 { 00:19:06.295 "cntlid": 35, 00:19:06.295 "qid": 0, 00:19:06.295 "state": "enabled", 00:19:06.295 "listen_address": { 00:19:06.295 "trtype": "TCP", 00:19:06.295 "adrfam": "IPv4", 00:19:06.295 "traddr": "10.0.0.2", 00:19:06.295 "trsvcid": "4420" 00:19:06.295 }, 00:19:06.295 "peer_address": { 00:19:06.295 "trtype": "TCP", 00:19:06.295 "adrfam": "IPv4", 00:19:06.295 "traddr": "10.0.0.1", 00:19:06.295 "trsvcid": "58462" 00:19:06.295 }, 00:19:06.295 "auth": { 00:19:06.295 "state": "completed", 00:19:06.295 "digest": "sha256", 00:19:06.295 "dhgroup": "ffdhe6144" 00:19:06.295 } 00:19:06.295 } 00:19:06.295 ]' 00:19:06.295 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:06.553 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.553 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:06.553 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.553 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:06.553 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.553 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.553 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.812 13:57:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.379 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:07.638 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:07.897 00:19:07.897 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:07.897 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.897 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
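A condensed reading of the loop the trace above keeps repeating for each digest/dhgroup/key combination. This is an editor's sketch, not part of target/auth.sh: hostrpc stands in for the host-side rpc.py wrapper seen in the trace, the target-side rpc_cmd is approximated here by rpc.py on its default socket (an assumption), and $hostnqn abbreviates the long uuid host NQN used throughout.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

  # Host side: restrict the allowed DH-HMAC-CHAP digest/dhgroup for this pass.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # Target side: allow this host on the subsystem with the key under test.
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2
  # Attach through the host bdev layer, authenticating with the same key.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
  # Verify, then tear down before the next combination.
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
  "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'           # expect: sha256 / ffdhe6144 / completed
  hostrpc bdev_nvme_detach_controller nvme0

Every RPC and jq expression above appears verbatim in the trace; only the wrapper function and variable names are added for readability.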
00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:08.155 { 00:19:08.155 "cntlid": 37, 00:19:08.155 "qid": 0, 00:19:08.155 "state": "enabled", 00:19:08.155 "listen_address": { 00:19:08.155 "trtype": "TCP", 00:19:08.155 "adrfam": "IPv4", 00:19:08.155 "traddr": "10.0.0.2", 00:19:08.155 "trsvcid": "4420" 00:19:08.155 }, 00:19:08.155 "peer_address": { 00:19:08.155 "trtype": "TCP", 00:19:08.155 "adrfam": "IPv4", 00:19:08.155 "traddr": "10.0.0.1", 00:19:08.155 "trsvcid": "49498" 00:19:08.155 }, 00:19:08.155 "auth": { 00:19:08.155 "state": "completed", 00:19:08.155 "digest": "sha256", 00:19:08.155 "dhgroup": "ffdhe6144" 00:19:08.155 } 00:19:08.155 } 00:19:08.155 ]' 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.155 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.720 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:08.977 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.236 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.801 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:09.801 { 00:19:09.801 "cntlid": 39, 00:19:09.801 "qid": 0, 00:19:09.801 "state": "enabled", 00:19:09.801 "listen_address": { 00:19:09.801 "trtype": "TCP", 00:19:09.801 "adrfam": "IPv4", 00:19:09.801 "traddr": "10.0.0.2", 00:19:09.801 "trsvcid": "4420" 00:19:09.801 }, 00:19:09.801 "peer_address": { 00:19:09.801 "trtype": "TCP", 00:19:09.801 "adrfam": "IPv4", 00:19:09.801 "traddr": "10.0.0.1", 00:19:09.801 "trsvcid": "49526" 00:19:09.801 }, 00:19:09.801 "auth": { 00:19:09.801 "state": "completed", 00:19:09.801 "digest": "sha256", 00:19:09.801 "dhgroup": "ffdhe6144" 00:19:09.801 } 00:19:09.801 } 00:19:09.801 ]' 00:19:09.801 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:10.058 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.058 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:10.058 
13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.058 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:10.058 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.058 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.058 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.351 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.947 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:10.947 13:57:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:11.513 00:19:11.513 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:11.513 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:11.513 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:11.772 { 00:19:11.772 "cntlid": 41, 00:19:11.772 "qid": 0, 00:19:11.772 "state": "enabled", 00:19:11.772 "listen_address": { 00:19:11.772 "trtype": "TCP", 00:19:11.772 "adrfam": "IPv4", 00:19:11.772 "traddr": "10.0.0.2", 00:19:11.772 "trsvcid": "4420" 00:19:11.772 }, 00:19:11.772 "peer_address": { 00:19:11.772 "trtype": "TCP", 00:19:11.772 "adrfam": "IPv4", 00:19:11.772 "traddr": "10.0.0.1", 00:19:11.772 "trsvcid": "49548" 00:19:11.772 }, 00:19:11.772 "auth": { 00:19:11.772 "state": "completed", 00:19:11.772 "digest": "sha256", 00:19:11.772 "dhgroup": "ffdhe8192" 00:19:11.772 } 00:19:11.772 } 00:19:11.772 ]' 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.772 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:12.030 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.030 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.030 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.030 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:12.597 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:12.855 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:13.420 00:19:13.420 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:13.420 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.420 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:13.678 { 00:19:13.678 "cntlid": 43, 00:19:13.678 "qid": 0, 00:19:13.678 "state": "enabled", 00:19:13.678 "listen_address": { 
00:19:13.678 "trtype": "TCP", 00:19:13.678 "adrfam": "IPv4", 00:19:13.678 "traddr": "10.0.0.2", 00:19:13.678 "trsvcid": "4420" 00:19:13.678 }, 00:19:13.678 "peer_address": { 00:19:13.678 "trtype": "TCP", 00:19:13.678 "adrfam": "IPv4", 00:19:13.678 "traddr": "10.0.0.1", 00:19:13.678 "trsvcid": "49566" 00:19:13.678 }, 00:19:13.678 "auth": { 00:19:13.678 "state": "completed", 00:19:13.678 "digest": "sha256", 00:19:13.678 "dhgroup": "ffdhe8192" 00:19:13.678 } 00:19:13.678 } 00:19:13.678 ]' 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.678 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:13.937 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.937 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.937 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.937 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:14.502 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:14.760 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:15.327 00:19:15.327 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:15.327 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:15.327 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.585 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.585 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.585 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.585 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:15.585 { 00:19:15.585 "cntlid": 45, 00:19:15.585 "qid": 0, 00:19:15.585 "state": "enabled", 00:19:15.585 "listen_address": { 00:19:15.585 "trtype": "TCP", 00:19:15.585 "adrfam": "IPv4", 00:19:15.585 "traddr": "10.0.0.2", 00:19:15.585 "trsvcid": "4420" 00:19:15.585 }, 00:19:15.585 "peer_address": { 00:19:15.585 "trtype": "TCP", 00:19:15.585 "adrfam": "IPv4", 00:19:15.585 "traddr": "10.0.0.1", 00:19:15.585 "trsvcid": "49588" 00:19:15.585 }, 00:19:15.585 "auth": { 00:19:15.585 "state": "completed", 00:19:15.585 "digest": "sha256", 00:19:15.585 "dhgroup": "ffdhe8192" 00:19:15.585 } 00:19:15.585 } 00:19:15.585 ]' 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.585 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.843 13:57:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.407 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.665 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.230 00:19:17.231 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:17.231 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.231 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:17.498 { 00:19:17.498 "cntlid": 47, 00:19:17.498 "qid": 0, 00:19:17.498 "state": "enabled", 00:19:17.498 "listen_address": { 00:19:17.498 "trtype": "TCP", 00:19:17.498 "adrfam": "IPv4", 00:19:17.498 "traddr": "10.0.0.2", 00:19:17.498 "trsvcid": "4420" 00:19:17.498 }, 00:19:17.498 "peer_address": { 00:19:17.498 "trtype": "TCP", 00:19:17.498 "adrfam": "IPv4", 00:19:17.498 "traddr": "10.0.0.1", 00:19:17.498 "trsvcid": "54110" 00:19:17.498 }, 00:19:17.498 "auth": { 00:19:17.498 "state": "completed", 00:19:17.498 "digest": "sha256", 00:19:17.498 "dhgroup": "ffdhe8192" 00:19:17.498 } 00:19:17.498 } 00:19:17.498 ]' 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.498 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.756 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups null 00:19:18.325 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.584 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.585 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.844 00:19:18.844 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:18.844 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:18.844 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.102 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.102 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.102 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.103 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.103 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.103 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:19.103 { 00:19:19.103 "cntlid": 49, 00:19:19.103 "qid": 0, 00:19:19.103 "state": "enabled", 00:19:19.103 "listen_address": { 00:19:19.103 "trtype": "TCP", 00:19:19.103 "adrfam": "IPv4", 00:19:19.103 "traddr": "10.0.0.2", 00:19:19.103 "trsvcid": "4420" 00:19:19.103 }, 00:19:19.103 "peer_address": { 00:19:19.103 "trtype": "TCP", 00:19:19.103 "adrfam": "IPv4", 00:19:19.103 "traddr": "10.0.0.1", 00:19:19.103 "trsvcid": "54128" 00:19:19.103 }, 00:19:19.103 "auth": { 00:19:19.103 "state": "completed", 00:19:19.103 "digest": "sha384", 00:19:19.103 "dhgroup": "null" 00:19:19.103 } 00:19:19.103 } 00:19:19.103 ]' 00:19:19.103 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
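The records above cover one connect_authenticate iteration (sha384 digest, null dhgroup, key0). A minimal shell sketch of that per-iteration flow, assuming the same RPC socket paths, addresses, and subsystem NQN seen in the trace; hostrpc and rpc_cmd stand in for the test's host-side and target-side RPC helpers, and HOSTNQN is a placeholder for the nqn.2014-08.org.nvmexpress:uuid:<hostid> value used throughout:

    # host-side RPC goes to the bdev_nvme application socket, as in the trace
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    connect_authenticate() {
        local digest=$1 dhgroup=$2 key=key$3
        # restrict the host to the digest/dhgroup pair under test
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # authorize the host NQN on the target with the matching DH-HMAC-CHAP key
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key "$key"
        # attaching the controller is where the authentication actually runs
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # the target-side qpair should report the negotiated parameters and a completed auth state
        local qpairs
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0
    }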
00:19:19.103 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.103 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:19.361 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:19.361 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:19.361 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.361 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.361 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.618 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:20.185 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 00:19:20.444 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:20.702 00:19:20.702 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:20.702 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:20.702 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:21.005 { 00:19:21.005 "cntlid": 51, 00:19:21.005 "qid": 0, 00:19:21.005 "state": "enabled", 00:19:21.005 "listen_address": { 00:19:21.005 "trtype": "TCP", 00:19:21.005 "adrfam": "IPv4", 00:19:21.005 "traddr": "10.0.0.2", 00:19:21.005 "trsvcid": "4420" 00:19:21.005 }, 00:19:21.005 "peer_address": { 00:19:21.005 "trtype": "TCP", 00:19:21.005 "adrfam": "IPv4", 00:19:21.005 "traddr": "10.0.0.1", 00:19:21.005 "trsvcid": "54160" 00:19:21.005 }, 00:19:21.005 "auth": { 00:19:21.005 "state": "completed", 00:19:21.005 "digest": "sha384", 00:19:21.005 "dhgroup": "null" 00:19:21.005 } 00:19:21.005 } 00:19:21.005 ]' 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.005 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.263 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:21.827 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:22.084 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:22.341 00:19:22.341 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:22.341 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.341 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:22.598 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.598 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.598 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.598 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.598 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.598 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:22.598 { 00:19:22.598 "cntlid": 53, 00:19:22.598 "qid": 0, 00:19:22.598 "state": "enabled", 00:19:22.598 "listen_address": { 00:19:22.598 
"trtype": "TCP", 00:19:22.598 "adrfam": "IPv4", 00:19:22.598 "traddr": "10.0.0.2", 00:19:22.598 "trsvcid": "4420" 00:19:22.598 }, 00:19:22.598 "peer_address": { 00:19:22.598 "trtype": "TCP", 00:19:22.598 "adrfam": "IPv4", 00:19:22.598 "traddr": "10.0.0.1", 00:19:22.598 "trsvcid": "54178" 00:19:22.598 }, 00:19:22.598 "auth": { 00:19:22.598 "state": "completed", 00:19:22.598 "digest": "sha384", 00:19:22.598 "dhgroup": "null" 00:19:22.598 } 00:19:22.598 } 00:19:22.598 ]' 00:19:22.598 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:22.598 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.598 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:22.598 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:22.598 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:22.598 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.598 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.598 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.855 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.420 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.677 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.934 00:19:23.934 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:23.934 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.934 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:24.192 { 00:19:24.192 "cntlid": 55, 00:19:24.192 "qid": 0, 00:19:24.192 "state": "enabled", 00:19:24.192 "listen_address": { 00:19:24.192 "trtype": "TCP", 00:19:24.192 "adrfam": "IPv4", 00:19:24.192 "traddr": "10.0.0.2", 00:19:24.192 "trsvcid": "4420" 00:19:24.192 }, 00:19:24.192 "peer_address": { 00:19:24.192 "trtype": "TCP", 00:19:24.192 "adrfam": "IPv4", 00:19:24.192 "traddr": "10.0.0.1", 00:19:24.192 "trsvcid": "54214" 00:19:24.192 }, 00:19:24.192 "auth": { 00:19:24.192 "state": "completed", 00:19:24.192 "digest": "sha384", 00:19:24.192 "dhgroup": "null" 00:19:24.192 } 00:19:24.192 } 00:19:24.192 ]' 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.192 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:24.449 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:24.449 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:24.449 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.449 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.449 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.706 13:57:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:25.271 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:25.529 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:25.787 { 00:19:25.787 "cntlid": 57, 00:19:25.787 "qid": 0, 00:19:25.787 "state": "enabled", 00:19:25.787 "listen_address": { 00:19:25.787 "trtype": "TCP", 00:19:25.787 "adrfam": "IPv4", 00:19:25.787 "traddr": "10.0.0.2", 00:19:25.787 "trsvcid": "4420" 00:19:25.787 }, 00:19:25.787 "peer_address": { 00:19:25.787 "trtype": "TCP", 00:19:25.787 "adrfam": "IPv4", 00:19:25.787 "traddr": "10.0.0.1", 00:19:25.787 "trsvcid": "54248" 00:19:25.787 }, 00:19:25.787 "auth": { 00:19:25.787 "state": "completed", 00:19:25.787 "digest": "sha384", 00:19:25.787 "dhgroup": "ffdhe2048" 00:19:25.787 } 00:19:25.787 } 00:19:25.787 ]' 00:19:25.787 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:26.053 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.053 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:26.053 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.053 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:26.053 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.053 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.053 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.320 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.886 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:27.143 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:27.400 00:19:27.400 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:27.400 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:27.400 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:27.657 { 00:19:27.657 "cntlid": 59, 00:19:27.657 "qid": 0, 00:19:27.657 "state": "enabled", 00:19:27.657 "listen_address": { 00:19:27.657 "trtype": "TCP", 00:19:27.657 "adrfam": "IPv4", 00:19:27.657 "traddr": "10.0.0.2", 00:19:27.657 "trsvcid": "4420" 00:19:27.657 }, 00:19:27.657 "peer_address": { 00:19:27.657 "trtype": "TCP", 00:19:27.657 "adrfam": "IPv4", 00:19:27.657 "traddr": "10.0.0.1", 00:19:27.657 "trsvcid": "48898" 00:19:27.657 }, 00:19:27.657 "auth": { 00:19:27.657 "state": "completed", 00:19:27.657 "digest": "sha384", 00:19:27.657 "dhgroup": "ffdhe2048" 00:19:27.657 } 00:19:27.657 } 00:19:27.657 ]' 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
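Each iteration also exercises the kernel initiator through nvme-cli and then de-authorizes the host before the next key/dhgroup, as the surrounding records show. A sketch of that leg, with HOSTID standing in for the 0861b14b-... UUID and DHCHAP_SECRET for the DHHC-1:xx: secret printed in the trace (both placeholders, not new values):

    # connect with the kernel NVMe/TCP initiator, presenting the DH-HMAC-CHAP secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid "${HOSTID}" \
        --dhchap-secret "${DHCHAP_SECRET}"
    # tear the session down and drop the host from the subsystem before the next pass
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}"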
00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.657 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:27.959 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.959 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.959 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.959 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.527 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:28.785 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:29.352 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:29.352 { 00:19:29.352 "cntlid": 61, 00:19:29.352 "qid": 0, 00:19:29.352 "state": "enabled", 00:19:29.352 "listen_address": { 00:19:29.352 "trtype": "TCP", 00:19:29.352 "adrfam": "IPv4", 00:19:29.352 "traddr": "10.0.0.2", 00:19:29.352 "trsvcid": "4420" 00:19:29.352 }, 00:19:29.352 "peer_address": { 00:19:29.352 "trtype": "TCP", 00:19:29.352 "adrfam": "IPv4", 00:19:29.352 "traddr": "10.0.0.1", 00:19:29.352 "trsvcid": "48930" 00:19:29.352 }, 00:19:29.352 "auth": { 00:19:29.352 "state": "completed", 00:19:29.352 "digest": "sha384", 00:19:29.352 "dhgroup": "ffdhe2048" 00:19:29.352 } 00:19:29.352 } 00:19:29.352 ]' 00:19:29.352 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:29.610 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.610 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:29.610 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.610 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:29.610 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.610 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.610 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.868 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:30.435 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.435 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:30.435 
13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.435 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.435 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.435 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:30.435 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:30.435 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.694 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.960 00:19:30.960 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:30.960 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:30.960 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.217 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.217 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.217 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.217 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.217 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.217 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:31.217 { 00:19:31.217 "cntlid": 63, 00:19:31.217 "qid": 0, 00:19:31.217 "state": "enabled", 00:19:31.217 "listen_address": { 00:19:31.217 "trtype": "TCP", 00:19:31.217 "adrfam": "IPv4", 00:19:31.217 "traddr": 
"10.0.0.2", 00:19:31.217 "trsvcid": "4420" 00:19:31.217 }, 00:19:31.217 "peer_address": { 00:19:31.217 "trtype": "TCP", 00:19:31.217 "adrfam": "IPv4", 00:19:31.217 "traddr": "10.0.0.1", 00:19:31.217 "trsvcid": "48956" 00:19:31.217 }, 00:19:31.217 "auth": { 00:19:31.217 "state": "completed", 00:19:31.217 "digest": "sha384", 00:19:31.217 "dhgroup": "ffdhe2048" 00:19:31.217 } 00:19:31.218 } 00:19:31.218 ]' 00:19:31.218 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:31.218 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.218 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:31.218 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.218 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:31.476 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.476 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.476 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.735 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.302 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:32.564 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:32.826 00:19:32.826 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:32.826 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.826 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:33.100 { 00:19:33.100 "cntlid": 65, 00:19:33.100 "qid": 0, 00:19:33.100 "state": "enabled", 00:19:33.100 "listen_address": { 00:19:33.100 "trtype": "TCP", 00:19:33.100 "adrfam": "IPv4", 00:19:33.100 "traddr": "10.0.0.2", 00:19:33.100 "trsvcid": "4420" 00:19:33.100 }, 00:19:33.100 "peer_address": { 00:19:33.100 "trtype": "TCP", 00:19:33.100 "adrfam": "IPv4", 00:19:33.100 "traddr": "10.0.0.1", 00:19:33.100 "trsvcid": "48998" 00:19:33.100 }, 00:19:33.100 "auth": { 00:19:33.100 "state": "completed", 00:19:33.100 "digest": "sha384", 00:19:33.100 "dhgroup": "ffdhe3072" 00:19:33.100 } 00:19:33.100 } 00:19:33.100 ]' 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.100 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:19:33.374 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:33.958 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:34.229 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:34.491 00:19:34.491 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:34.491 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.491 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:34.750 { 00:19:34.750 "cntlid": 67, 00:19:34.750 "qid": 0, 00:19:34.750 "state": "enabled", 00:19:34.750 "listen_address": { 00:19:34.750 "trtype": "TCP", 00:19:34.750 "adrfam": "IPv4", 00:19:34.750 "traddr": "10.0.0.2", 00:19:34.750 "trsvcid": "4420" 00:19:34.750 }, 00:19:34.750 "peer_address": { 00:19:34.750 "trtype": "TCP", 00:19:34.750 "adrfam": "IPv4", 00:19:34.750 "traddr": "10.0.0.1", 00:19:34.750 "trsvcid": "49026" 00:19:34.750 }, 00:19:34.750 "auth": { 00:19:34.750 "state": "completed", 00:19:34.750 "digest": "sha384", 00:19:34.750 "dhgroup": "ffdhe3072" 00:19:34.750 } 00:19:34.750 } 00:19:34.750 ]' 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:34.750 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.009 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.009 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.009 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:35.577 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.577 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:35.577 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.577 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:35.836 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.094 00:19:36.353 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:36.353 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:36.353 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.611 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.611 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.611 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.611 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.611 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.611 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:36.611 { 00:19:36.611 "cntlid": 69, 00:19:36.611 "qid": 0, 00:19:36.611 "state": "enabled", 00:19:36.611 "listen_address": { 00:19:36.611 "trtype": "TCP", 00:19:36.611 "adrfam": "IPv4", 00:19:36.611 "traddr": "10.0.0.2", 00:19:36.611 "trsvcid": "4420" 00:19:36.611 }, 00:19:36.611 "peer_address": { 00:19:36.611 "trtype": "TCP", 00:19:36.611 "adrfam": "IPv4", 00:19:36.611 "traddr": "10.0.0.1", 00:19:36.611 "trsvcid": "49056" 00:19:36.611 }, 00:19:36.611 "auth": { 00:19:36.611 "state": "completed", 00:19:36.611 "digest": "sha384", 00:19:36.611 "dhgroup": "ffdhe3072" 00:19:36.611 } 00:19:36.611 } 00:19:36.611 ]' 00:19:36.611 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:36.611 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.611 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:19:36.611 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.611 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:36.611 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.611 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.611 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.870 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:37.438 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:37.705 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.706 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.965 00:19:37.965 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:37.965 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:37.965 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.224 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.224 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.224 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.224 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.224 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:38.224 { 00:19:38.224 "cntlid": 71, 00:19:38.224 "qid": 0, 00:19:38.224 "state": "enabled", 00:19:38.224 "listen_address": { 00:19:38.224 "trtype": "TCP", 00:19:38.224 "adrfam": "IPv4", 00:19:38.224 "traddr": "10.0.0.2", 00:19:38.224 "trsvcid": "4420" 00:19:38.224 }, 00:19:38.224 "peer_address": { 00:19:38.224 "trtype": "TCP", 00:19:38.224 "adrfam": "IPv4", 00:19:38.224 "traddr": "10.0.0.1", 00:19:38.224 "trsvcid": "52724" 00:19:38.224 }, 00:19:38.224 "auth": { 00:19:38.224 "state": "completed", 00:19:38.224 "digest": "sha384", 00:19:38.224 "dhgroup": "ffdhe3072" 00:19:38.224 } 00:19:38.224 } 00:19:38.224 ]' 00:19:38.224 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:38.225 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.225 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:38.225 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.225 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:38.484 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.484 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.484 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.484 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:39.052 13:57:37 
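The xtrace above is dense, so here is a condensed sketch of the RPC sequence that each connect_authenticate iteration in this log performs (this one for sha384 / ffdhe3072 / key3). The paths, socket, NQNs and key names are taken from the log itself; the keys key0..key3 are assumed to have been created earlier in auth.sh, outside this excerpt, and rpc_cmd / hostrpc are the test's wrappers around rpc.py for the target and host application sockets.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c

    # Host-side bdev_nvme options: restrict DH-HMAC-CHAP negotiation to the digest
    # and DH group under test for this iteration.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Target side: allow the host NQN on the subsystem and bind it to the test key.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side: attach a controller, authenticating with the same key.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3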
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:39.052 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.312 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.572 00:19:39.572 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:39.572 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:39.572 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.831 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.831 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.831 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.831 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.831 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.831 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:39.831 { 00:19:39.831 "cntlid": 73, 00:19:39.831 "qid": 0, 00:19:39.831 "state": "enabled", 00:19:39.831 
"listen_address": { 00:19:39.831 "trtype": "TCP", 00:19:39.831 "adrfam": "IPv4", 00:19:39.831 "traddr": "10.0.0.2", 00:19:39.831 "trsvcid": "4420" 00:19:39.831 }, 00:19:39.831 "peer_address": { 00:19:39.831 "trtype": "TCP", 00:19:39.831 "adrfam": "IPv4", 00:19:39.831 "traddr": "10.0.0.1", 00:19:39.831 "trsvcid": "52742" 00:19:39.831 }, 00:19:39.831 "auth": { 00:19:39.831 "state": "completed", 00:19:39.831 "digest": "sha384", 00:19:39.831 "dhgroup": "ffdhe4096" 00:19:39.831 } 00:19:39.831 } 00:19:39.831 ]' 00:19:39.831 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:40.090 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.090 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:40.090 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.090 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:40.090 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.090 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.090 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.349 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:40.918 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:41.177 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:41.437 00:19:41.437 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:41.437 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.437 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:41.696 { 00:19:41.696 "cntlid": 75, 00:19:41.696 "qid": 0, 00:19:41.696 "state": "enabled", 00:19:41.696 "listen_address": { 00:19:41.696 "trtype": "TCP", 00:19:41.696 "adrfam": "IPv4", 00:19:41.696 "traddr": "10.0.0.2", 00:19:41.696 "trsvcid": "4420" 00:19:41.696 }, 00:19:41.696 "peer_address": { 00:19:41.696 "trtype": "TCP", 00:19:41.696 "adrfam": "IPv4", 00:19:41.696 "traddr": "10.0.0.1", 00:19:41.696 "trsvcid": "52762" 00:19:41.696 }, 00:19:41.696 "auth": { 00:19:41.696 "state": "completed", 00:19:41.696 "digest": "sha384", 00:19:41.696 "dhgroup": "ffdhe4096" 00:19:41.696 } 00:19:41.696 } 00:19:41.696 ]' 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:41.696 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:41.955 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.955 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.955 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.955 
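Each attach in this log is followed by the same three checks before the controller is detached again. A minimal sketch of that verification step, using the values from the ffdhe4096 / key1 iteration above and the same RPC/HOST_SOCK/SUBNQN variables as the previous sketch:

    # The attached controller must appear under the expected name on the host side.
    [[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # The target's qpair for this connection must report a completed DH-HMAC-CHAP
    # handshake using the digest and DH group configured for this iteration.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach before the nvme-cli connect that exercises the same key next.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0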
13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:42.523 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.524 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:42.524 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.524 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.524 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.524 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:42.524 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.524 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:42.783 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:43.042 00:19:43.042 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:43.042 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:43.042 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:43.301 { 00:19:43.301 "cntlid": 77, 00:19:43.301 "qid": 0, 00:19:43.301 "state": "enabled", 00:19:43.301 "listen_address": { 00:19:43.301 "trtype": "TCP", 00:19:43.301 "adrfam": "IPv4", 00:19:43.301 "traddr": "10.0.0.2", 00:19:43.301 "trsvcid": "4420" 00:19:43.301 }, 00:19:43.301 "peer_address": { 00:19:43.301 "trtype": "TCP", 00:19:43.301 "adrfam": "IPv4", 00:19:43.301 "traddr": "10.0.0.1", 00:19:43.301 "trsvcid": "52784" 00:19:43.301 }, 00:19:43.301 "auth": { 00:19:43.301 "state": "completed", 00:19:43.301 "digest": "sha384", 00:19:43.301 "dhgroup": "ffdhe4096" 00:19:43.301 } 00:19:43.301 } 00:19:43.301 ]' 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.301 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:43.560 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.560 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:43.560 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.560 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.560 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.819 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.386 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.644 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.645 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.903 00:19:44.903 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:44.903 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.903 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:45.162 { 00:19:45.162 "cntlid": 79, 00:19:45.162 "qid": 0, 00:19:45.162 "state": "enabled", 00:19:45.162 "listen_address": { 00:19:45.162 "trtype": "TCP", 00:19:45.162 "adrfam": "IPv4", 00:19:45.162 "traddr": "10.0.0.2", 00:19:45.162 "trsvcid": "4420" 00:19:45.162 }, 00:19:45.162 "peer_address": { 00:19:45.162 "trtype": "TCP", 00:19:45.162 "adrfam": "IPv4", 00:19:45.162 "traddr": "10.0.0.1", 00:19:45.162 "trsvcid": "52812" 00:19:45.162 }, 00:19:45.162 "auth": { 00:19:45.162 "state": "completed", 00:19:45.162 "digest": "sha384", 00:19:45.162 "dhgroup": "ffdhe4096" 00:19:45.162 } 00:19:45.162 } 00:19:45.162 ]' 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:45.162 
13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.162 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.420 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.988 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.248 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:46.248 13:57:44 
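The same cycle repeats for every DH group block in this log (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192), four keys per group, driven by the loops visible at target/auth.sh@85-87 in the trace. A schematic of that driver, with connect_authenticate standing for the attach/verify/detach sequence sketched earlier; sha384 is written literally because it is the only digest this excerpt covers, and in the script it is presumably a loop variable as well:

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192 ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done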
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:46.817 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:46.817 { 00:19:46.817 "cntlid": 81, 00:19:46.817 "qid": 0, 00:19:46.817 "state": "enabled", 00:19:46.817 "listen_address": { 00:19:46.817 "trtype": "TCP", 00:19:46.817 "adrfam": "IPv4", 00:19:46.817 "traddr": "10.0.0.2", 00:19:46.817 "trsvcid": "4420" 00:19:46.817 }, 00:19:46.817 "peer_address": { 00:19:46.817 "trtype": "TCP", 00:19:46.817 "adrfam": "IPv4", 00:19:46.817 "traddr": "10.0.0.1", 00:19:46.817 "trsvcid": "50990" 00:19:46.817 }, 00:19:46.817 "auth": { 00:19:46.817 "state": "completed", 00:19:46.817 "digest": "sha384", 00:19:46.817 "dhgroup": "ffdhe6144" 00:19:46.817 } 00:19:46.817 } 00:19:46.817 ]' 00:19:46.817 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:47.075 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.075 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:47.075 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.075 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:47.075 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.075 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.075 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.334 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.901 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:48.161 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:48.420 00:19:48.420 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:48.420 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:48.420 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:48.679 { 00:19:48.679 "cntlid": 83, 00:19:48.679 "qid": 0, 00:19:48.679 "state": "enabled", 00:19:48.679 "listen_address": { 
00:19:48.679 "trtype": "TCP", 00:19:48.679 "adrfam": "IPv4", 00:19:48.679 "traddr": "10.0.0.2", 00:19:48.679 "trsvcid": "4420" 00:19:48.679 }, 00:19:48.679 "peer_address": { 00:19:48.679 "trtype": "TCP", 00:19:48.679 "adrfam": "IPv4", 00:19:48.679 "traddr": "10.0.0.1", 00:19:48.679 "trsvcid": "51014" 00:19:48.679 }, 00:19:48.679 "auth": { 00:19:48.679 "state": "completed", 00:19:48.679 "digest": "sha384", 00:19:48.679 "dhgroup": "ffdhe6144" 00:19:48.679 } 00:19:48.679 } 00:19:48.679 ]' 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.679 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:48.938 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.938 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.938 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.938 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:49.505 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:49.764 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:50.332 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:50.332 { 00:19:50.332 "cntlid": 85, 00:19:50.332 "qid": 0, 00:19:50.332 "state": "enabled", 00:19:50.332 "listen_address": { 00:19:50.332 "trtype": "TCP", 00:19:50.332 "adrfam": "IPv4", 00:19:50.332 "traddr": "10.0.0.2", 00:19:50.332 "trsvcid": "4420" 00:19:50.332 }, 00:19:50.332 "peer_address": { 00:19:50.332 "trtype": "TCP", 00:19:50.332 "adrfam": "IPv4", 00:19:50.332 "traddr": "10.0.0.1", 00:19:50.332 "trsvcid": "51046" 00:19:50.332 }, 00:19:50.332 "auth": { 00:19:50.332 "state": "completed", 00:19:50.332 "digest": "sha384", 00:19:50.332 "dhgroup": "ffdhe6144" 00:19:50.332 } 00:19:50.332 } 00:19:50.332 ]' 00:19:50.332 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:50.590 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.590 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:50.590 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.590 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:50.590 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.590 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.590 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.848 13:57:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.414 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.673 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.932 00:19:51.932 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:51.932 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.932 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:52.190 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:52.190 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.190 13:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.190 13:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 13:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.190 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:52.190 { 00:19:52.190 "cntlid": 87, 00:19:52.190 "qid": 0, 00:19:52.190 "state": "enabled", 00:19:52.190 "listen_address": { 00:19:52.190 "trtype": "TCP", 00:19:52.190 "adrfam": "IPv4", 00:19:52.190 "traddr": "10.0.0.2", 00:19:52.190 "trsvcid": "4420" 00:19:52.190 }, 00:19:52.190 "peer_address": { 00:19:52.190 "trtype": "TCP", 00:19:52.190 "adrfam": "IPv4", 00:19:52.190 "traddr": "10.0.0.1", 00:19:52.190 "trsvcid": "51088" 00:19:52.190 }, 00:19:52.190 "auth": { 00:19:52.190 "state": "completed", 00:19:52.190 "digest": "sha384", 00:19:52.190 "dhgroup": "ffdhe6144" 00:19:52.190 } 00:19:52.190 } 00:19:52.190 ]' 00:19:52.190 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:52.449 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.449 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:52.449 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.449 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:52.449 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.449 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.449 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.708 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:53.275 13:57:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:53.533 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:54.100 00:19:54.100 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:54.100 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:54.100 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.100 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.100 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.100 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.100 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:54.358 { 00:19:54.358 "cntlid": 89, 00:19:54.358 "qid": 0, 00:19:54.358 "state": "enabled", 00:19:54.358 "listen_address": { 00:19:54.358 "trtype": "TCP", 00:19:54.358 "adrfam": "IPv4", 00:19:54.358 "traddr": "10.0.0.2", 00:19:54.358 "trsvcid": "4420" 00:19:54.358 }, 00:19:54.358 "peer_address": { 00:19:54.358 "trtype": "TCP", 00:19:54.358 "adrfam": "IPv4", 00:19:54.358 "traddr": "10.0.0.1", 00:19:54.358 "trsvcid": "51112" 00:19:54.358 }, 00:19:54.358 "auth": { 00:19:54.358 "state": "completed", 00:19:54.358 "digest": "sha384", 00:19:54.358 "dhgroup": "ffdhe8192" 00:19:54.358 } 00:19:54.358 } 00:19:54.358 ]' 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.358 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.616 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:19:55.182 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.182 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:55.183 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.183 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.183 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.183 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:55.183 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:55.183 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.441 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:55.441 13:57:53 
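Besides the bdev_nvme attach path, every key is also exercised through the kernel initiator, as in the nvme connect that appears just above. A sketch of that host-side round trip and the teardown the log performs after it; the DHHC-1 secret shown is the key0 secret copied from the log, and rpc_cmd is the target-side rpc.py wrapper:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTUUID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTUUID

    # Connect with nvme-cli, presenting the DH-HMAC-CHAP secret that matches the key
    # currently configured on the subsystem.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTUUID" \
        --dhchap-secret 'DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==:'

    # The log then shows "disconnected 1 controller(s)" and removes the host entry
    # before moving on to the next key.
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"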
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:56.008 00:19:56.008 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:56.008 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:56.008 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.008 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.008 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.008 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.008 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:56.266 { 00:19:56.266 "cntlid": 91, 00:19:56.266 "qid": 0, 00:19:56.266 "state": "enabled", 00:19:56.266 "listen_address": { 00:19:56.266 "trtype": "TCP", 00:19:56.266 "adrfam": "IPv4", 00:19:56.266 "traddr": "10.0.0.2", 00:19:56.266 "trsvcid": "4420" 00:19:56.266 }, 00:19:56.266 "peer_address": { 00:19:56.266 "trtype": "TCP", 00:19:56.266 "adrfam": "IPv4", 00:19:56.266 "traddr": "10.0.0.1", 00:19:56.266 "trsvcid": "51144" 00:19:56.266 }, 00:19:56.266 "auth": { 00:19:56.266 "state": "completed", 00:19:56.266 "digest": "sha384", 00:19:56.266 "dhgroup": "ffdhe8192" 00:19:56.266 } 00:19:56.266 } 00:19:56.266 ]' 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.266 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.524 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:19:57.089 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.089 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:57.089 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.089 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.089 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.089 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:57.089 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:57.090 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:57.347 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:57.911 00:19:57.911 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:57.911 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:57.911 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:58.169 { 00:19:58.169 "cntlid": 93, 00:19:58.169 "qid": 0, 00:19:58.169 "state": "enabled", 00:19:58.169 "listen_address": { 
00:19:58.169 "trtype": "TCP", 00:19:58.169 "adrfam": "IPv4", 00:19:58.169 "traddr": "10.0.0.2", 00:19:58.169 "trsvcid": "4420" 00:19:58.169 }, 00:19:58.169 "peer_address": { 00:19:58.169 "trtype": "TCP", 00:19:58.169 "adrfam": "IPv4", 00:19:58.169 "traddr": "10.0.0.1", 00:19:58.169 "trsvcid": "37662" 00:19:58.169 }, 00:19:58.169 "auth": { 00:19:58.169 "state": "completed", 00:19:58.169 "digest": "sha384", 00:19:58.169 "dhgroup": "ffdhe8192" 00:19:58.169 } 00:19:58.169 } 00:19:58.169 ]' 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.169 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.428 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.363 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:19:59.364 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.364 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.364 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.364 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.364 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.931 00:19:59.931 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:59.931 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:59.931 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:00.190 { 00:20:00.190 "cntlid": 95, 00:20:00.190 "qid": 0, 00:20:00.190 "state": "enabled", 00:20:00.190 "listen_address": { 00:20:00.190 "trtype": "TCP", 00:20:00.190 "adrfam": "IPv4", 00:20:00.190 "traddr": "10.0.0.2", 00:20:00.190 "trsvcid": "4420" 00:20:00.190 }, 00:20:00.190 "peer_address": { 00:20:00.190 "trtype": "TCP", 00:20:00.190 "adrfam": "IPv4", 00:20:00.190 "traddr": "10.0.0.1", 00:20:00.190 "trsvcid": "37688" 00:20:00.190 }, 00:20:00.190 "auth": { 00:20:00.190 "state": "completed", 00:20:00.190 "digest": "sha384", 00:20:00.190 "dhgroup": "ffdhe8192" 00:20:00.190 } 00:20:00.190 } 00:20:00.190 ]' 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.190 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.449 13:57:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:01.020 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:01.278 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:01.537 00:20:01.537 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:01.537 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:01.537 13:57:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.795 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.795 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.795 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.795 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.795 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.795 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:01.795 { 00:20:01.795 "cntlid": 97, 00:20:01.795 "qid": 0, 00:20:01.795 "state": "enabled", 00:20:01.796 "listen_address": { 00:20:01.796 "trtype": "TCP", 00:20:01.796 "adrfam": "IPv4", 00:20:01.796 "traddr": "10.0.0.2", 00:20:01.796 "trsvcid": "4420" 00:20:01.796 }, 00:20:01.796 "peer_address": { 00:20:01.796 "trtype": "TCP", 00:20:01.796 "adrfam": "IPv4", 00:20:01.796 "traddr": "10.0.0.1", 00:20:01.796 "trsvcid": "37696" 00:20:01.796 }, 00:20:01.796 "auth": { 00:20:01.796 "state": "completed", 00:20:01.796 "digest": "sha512", 00:20:01.796 "dhgroup": "null" 00:20:01.796 } 00:20:01.796 } 00:20:01.796 ]' 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.796 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.054 13:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:20:02.621 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:02.879 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:03.137 00:20:03.137 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:03.137 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.137 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:03.396 { 00:20:03.396 "cntlid": 99, 00:20:03.396 "qid": 0, 00:20:03.396 "state": "enabled", 00:20:03.396 "listen_address": { 00:20:03.396 "trtype": "TCP", 00:20:03.396 "adrfam": "IPv4", 00:20:03.396 "traddr": "10.0.0.2", 00:20:03.396 "trsvcid": "4420" 00:20:03.396 }, 00:20:03.396 "peer_address": { 00:20:03.396 "trtype": "TCP", 00:20:03.396 "adrfam": "IPv4", 00:20:03.396 "traddr": "10.0.0.1", 00:20:03.396 "trsvcid": "37726" 00:20:03.396 }, 00:20:03.396 "auth": { 00:20:03.396 "state": "completed", 00:20:03.396 "digest": "sha512", 00:20:03.396 "dhgroup": "null" 00:20:03.396 } 00:20:03.396 } 00:20:03.396 ]' 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:03.396 13:58:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.396 13:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.658 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:04.224 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.483 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.483 13:58:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.741 00:20:04.741 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:04.741 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:04.741 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:05.006 { 00:20:05.006 "cntlid": 101, 00:20:05.006 "qid": 0, 00:20:05.006 "state": "enabled", 00:20:05.006 "listen_address": { 00:20:05.006 "trtype": "TCP", 00:20:05.006 "adrfam": "IPv4", 00:20:05.006 "traddr": "10.0.0.2", 00:20:05.006 "trsvcid": "4420" 00:20:05.006 }, 00:20:05.006 "peer_address": { 00:20:05.006 "trtype": "TCP", 00:20:05.006 "adrfam": "IPv4", 00:20:05.006 "traddr": "10.0.0.1", 00:20:05.006 "trsvcid": "37764" 00:20:05.006 }, 00:20:05.006 "auth": { 00:20:05.006 "state": "completed", 00:20:05.006 "digest": "sha512", 00:20:05.006 "dhgroup": "null" 00:20:05.006 } 00:20:05.006 } 00:20:05.006 ]' 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.006 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.271 13:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:05.836 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.094 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.353 00:20:06.611 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:06.611 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.611 13:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:06.611 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.611 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.611 13:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.611 13:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.611 13:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.611 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:06.611 { 00:20:06.611 "cntlid": 103, 00:20:06.611 "qid": 0, 00:20:06.611 "state": "enabled", 00:20:06.611 "listen_address": { 00:20:06.611 
"trtype": "TCP", 00:20:06.611 "adrfam": "IPv4", 00:20:06.611 "traddr": "10.0.0.2", 00:20:06.611 "trsvcid": "4420" 00:20:06.611 }, 00:20:06.611 "peer_address": { 00:20:06.611 "trtype": "TCP", 00:20:06.611 "adrfam": "IPv4", 00:20:06.611 "traddr": "10.0.0.1", 00:20:06.611 "trsvcid": "54380" 00:20:06.611 }, 00:20:06.611 "auth": { 00:20:06.611 "state": "completed", 00:20:06.611 "digest": "sha512", 00:20:06.611 "dhgroup": "null" 00:20:06.611 } 00:20:06.611 } 00:20:06.611 ]' 00:20:06.611 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:06.869 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.869 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:06.869 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:06.869 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:06.869 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.869 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.869 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.127 13:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.693 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:07.951 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:08.209 00:20:08.209 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:08.209 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:08.209 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:08.467 { 00:20:08.467 "cntlid": 105, 00:20:08.467 "qid": 0, 00:20:08.467 "state": "enabled", 00:20:08.467 "listen_address": { 00:20:08.467 "trtype": "TCP", 00:20:08.467 "adrfam": "IPv4", 00:20:08.467 "traddr": "10.0.0.2", 00:20:08.467 "trsvcid": "4420" 00:20:08.467 }, 00:20:08.467 "peer_address": { 00:20:08.467 "trtype": "TCP", 00:20:08.467 "adrfam": "IPv4", 00:20:08.467 "traddr": "10.0.0.1", 00:20:08.467 "trsvcid": "54416" 00:20:08.467 }, 00:20:08.467 "auth": { 00:20:08.467 "state": "completed", 00:20:08.467 "digest": "sha512", 00:20:08.467 "dhgroup": "ffdhe2048" 00:20:08.467 } 00:20:08.467 } 00:20:08.467 ]' 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.467 13:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.725 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.293 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:09.552 13:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:09.810 00:20:09.810 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:09.810 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.810 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
00:20:10.075 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.075 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.075 13:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.075 13:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.075 13:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.075 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:10.075 { 00:20:10.075 "cntlid": 107, 00:20:10.075 "qid": 0, 00:20:10.075 "state": "enabled", 00:20:10.075 "listen_address": { 00:20:10.075 "trtype": "TCP", 00:20:10.075 "adrfam": "IPv4", 00:20:10.075 "traddr": "10.0.0.2", 00:20:10.075 "trsvcid": "4420" 00:20:10.075 }, 00:20:10.076 "peer_address": { 00:20:10.076 "trtype": "TCP", 00:20:10.076 "adrfam": "IPv4", 00:20:10.076 "traddr": "10.0.0.1", 00:20:10.076 "trsvcid": "54442" 00:20:10.076 }, 00:20:10.076 "auth": { 00:20:10.076 "state": "completed", 00:20:10.076 "digest": "sha512", 00:20:10.076 "dhgroup": "ffdhe2048" 00:20:10.076 } 00:20:10.076 } 00:20:10.076 ]' 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.076 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.354 13:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.987 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:11.245 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:11.504 00:20:11.504 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:11.504 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:11.504 13:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.762 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.762 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:11.763 { 00:20:11.763 "cntlid": 109, 00:20:11.763 "qid": 0, 00:20:11.763 "state": "enabled", 00:20:11.763 "listen_address": { 00:20:11.763 "trtype": "TCP", 00:20:11.763 "adrfam": "IPv4", 00:20:11.763 "traddr": "10.0.0.2", 00:20:11.763 "trsvcid": "4420" 00:20:11.763 }, 00:20:11.763 "peer_address": { 00:20:11.763 "trtype": "TCP", 00:20:11.763 "adrfam": "IPv4", 00:20:11.763 "traddr": "10.0.0.1", 00:20:11.763 "trsvcid": "54466" 00:20:11.763 }, 00:20:11.763 "auth": { 00:20:11.763 "state": "completed", 00:20:11.763 "digest": "sha512", 00:20:11.763 "dhgroup": "ffdhe2048" 00:20:11.763 } 00:20:11.763 } 00:20:11.763 ]' 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.763 13:58:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.763 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.021 13:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:12.588 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.847 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.106 00:20:13.106 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:13.106 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:13.106 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:13.365 { 00:20:13.365 "cntlid": 111, 00:20:13.365 "qid": 0, 00:20:13.365 "state": "enabled", 00:20:13.365 "listen_address": { 00:20:13.365 "trtype": "TCP", 00:20:13.365 "adrfam": "IPv4", 00:20:13.365 "traddr": "10.0.0.2", 00:20:13.365 "trsvcid": "4420" 00:20:13.365 }, 00:20:13.365 "peer_address": { 00:20:13.365 "trtype": "TCP", 00:20:13.365 "adrfam": "IPv4", 00:20:13.365 "traddr": "10.0.0.1", 00:20:13.365 "trsvcid": "54502" 00:20:13.365 }, 00:20:13.365 "auth": { 00:20:13.365 "state": "completed", 00:20:13.365 "digest": "sha512", 00:20:13.365 "dhgroup": "ffdhe2048" 00:20:13.365 } 00:20:13.365 } 00:20:13.365 ]' 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.365 13:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.662 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.250 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:14.515 13:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:14.774 00:20:14.774 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:14.774 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:14.774 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.038 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:15.039 { 
00:20:15.039 "cntlid": 113, 00:20:15.039 "qid": 0, 00:20:15.039 "state": "enabled", 00:20:15.039 "listen_address": { 00:20:15.039 "trtype": "TCP", 00:20:15.039 "adrfam": "IPv4", 00:20:15.039 "traddr": "10.0.0.2", 00:20:15.039 "trsvcid": "4420" 00:20:15.039 }, 00:20:15.039 "peer_address": { 00:20:15.039 "trtype": "TCP", 00:20:15.039 "adrfam": "IPv4", 00:20:15.039 "traddr": "10.0.0.1", 00:20:15.039 "trsvcid": "54528" 00:20:15.039 }, 00:20:15.039 "auth": { 00:20:15.039 "state": "completed", 00:20:15.039 "digest": "sha512", 00:20:15.039 "dhgroup": "ffdhe3072" 00:20:15.039 } 00:20:15.039 } 00:20:15.039 ]' 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.039 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.301 13:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.867 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.126 13:58:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:16.126 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:16.385 00:20:16.385 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:16.385 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:16.385 13:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:16.644 { 00:20:16.644 "cntlid": 115, 00:20:16.644 "qid": 0, 00:20:16.644 "state": "enabled", 00:20:16.644 "listen_address": { 00:20:16.644 "trtype": "TCP", 00:20:16.644 "adrfam": "IPv4", 00:20:16.644 "traddr": "10.0.0.2", 00:20:16.644 "trsvcid": "4420" 00:20:16.644 }, 00:20:16.644 "peer_address": { 00:20:16.644 "trtype": "TCP", 00:20:16.644 "adrfam": "IPv4", 00:20:16.644 "traddr": "10.0.0.1", 00:20:16.644 "trsvcid": "54568" 00:20:16.644 }, 00:20:16.644 "auth": { 00:20:16.644 "state": "completed", 00:20:16.644 "digest": "sha512", 00:20:16.644 "dhgroup": "ffdhe3072" 00:20:16.644 } 00:20:16.644 } 00:20:16.644 ]' 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.644 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.903 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:17.472 13:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:17.731 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:17.990 00:20:17.990 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:17.990 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:17.990 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:18.249 { 00:20:18.249 "cntlid": 117, 00:20:18.249 "qid": 0, 00:20:18.249 "state": "enabled", 00:20:18.249 "listen_address": { 00:20:18.249 "trtype": "TCP", 00:20:18.249 "adrfam": "IPv4", 00:20:18.249 "traddr": "10.0.0.2", 00:20:18.249 "trsvcid": "4420" 00:20:18.249 }, 00:20:18.249 "peer_address": { 00:20:18.249 "trtype": "TCP", 00:20:18.249 "adrfam": "IPv4", 00:20:18.249 "traddr": "10.0.0.1", 00:20:18.249 "trsvcid": "58276" 00:20:18.249 }, 00:20:18.249 "auth": { 00:20:18.249 "state": "completed", 00:20:18.249 "digest": "sha512", 00:20:18.249 "dhgroup": "ffdhe3072" 00:20:18.249 } 00:20:18.249 } 00:20:18.249 ]' 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.249 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:18.508 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.508 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:18.508 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.508 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.508 13:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.768 13:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:19.401 13:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.660 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.920 00:20:19.920 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:19.920 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:19.920 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:20.178 { 00:20:20.178 "cntlid": 119, 00:20:20.178 "qid": 0, 00:20:20.178 "state": "enabled", 00:20:20.178 "listen_address": { 00:20:20.178 "trtype": "TCP", 00:20:20.178 "adrfam": "IPv4", 00:20:20.178 "traddr": "10.0.0.2", 00:20:20.178 "trsvcid": "4420" 00:20:20.178 }, 00:20:20.178 "peer_address": { 00:20:20.178 "trtype": "TCP", 00:20:20.178 "adrfam": "IPv4", 00:20:20.178 "traddr": "10.0.0.1", 00:20:20.178 "trsvcid": "58288" 00:20:20.178 }, 00:20:20.178 "auth": { 00:20:20.178 "state": "completed", 00:20:20.178 "digest": "sha512", 00:20:20.178 "dhgroup": "ffdhe3072" 00:20:20.178 } 00:20:20.178 } 00:20:20.178 ]' 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.178 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.436 13:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.003 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:21.261 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:21.520 00:20:21.520 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:21.520 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.520 13:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:21.779 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.779 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.779 13:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.779 13:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.779 13:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.779 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:21.779 { 00:20:21.779 "cntlid": 121, 00:20:21.779 "qid": 0, 00:20:21.779 "state": "enabled", 00:20:21.779 "listen_address": { 00:20:21.779 "trtype": "TCP", 00:20:21.779 "adrfam": "IPv4", 00:20:21.779 "traddr": "10.0.0.2", 00:20:21.779 "trsvcid": "4420" 00:20:21.779 }, 00:20:21.779 "peer_address": { 00:20:21.779 "trtype": "TCP", 00:20:21.779 "adrfam": "IPv4", 00:20:21.779 "traddr": "10.0.0.1", 00:20:21.779 "trsvcid": "58312" 00:20:21.779 }, 00:20:21.779 "auth": { 00:20:21.779 "state": "completed", 00:20:21.779 "digest": "sha512", 00:20:21.779 "dhgroup": "ffdhe4096" 00:20:21.779 } 00:20:21.779 } 00:20:21.779 ]' 00:20:21.779 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:21.780 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.780 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:21.780 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.780 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:21.780 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.780 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.780 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.038 13:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:20:22.606 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.606 13:58:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:22.606 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.606 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.606 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.606 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:22.606 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.606 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:22.866 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:23.433 00:20:23.433 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:23.433 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:23.433 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.433 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.433 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.433 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.433 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.693 13:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.693 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:23.693 { 
00:20:23.693 "cntlid": 123, 00:20:23.693 "qid": 0, 00:20:23.693 "state": "enabled", 00:20:23.693 "listen_address": { 00:20:23.693 "trtype": "TCP", 00:20:23.693 "adrfam": "IPv4", 00:20:23.693 "traddr": "10.0.0.2", 00:20:23.693 "trsvcid": "4420" 00:20:23.693 }, 00:20:23.693 "peer_address": { 00:20:23.693 "trtype": "TCP", 00:20:23.693 "adrfam": "IPv4", 00:20:23.693 "traddr": "10.0.0.1", 00:20:23.693 "trsvcid": "58344" 00:20:23.693 }, 00:20:23.693 "auth": { 00:20:23.693 "state": "completed", 00:20:23.693 "digest": "sha512", 00:20:23.693 "dhgroup": "ffdhe4096" 00:20:23.693 } 00:20:23.693 } 00:20:23.693 ]' 00:20:23.693 13:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:23.693 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.693 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:23.693 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.693 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:23.693 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.693 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.693 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.952 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.519 13:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.779 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:25.039 00:20:25.039 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:25.039 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:25.039 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:25.298 { 00:20:25.298 "cntlid": 125, 00:20:25.298 "qid": 0, 00:20:25.298 "state": "enabled", 00:20:25.298 "listen_address": { 00:20:25.298 "trtype": "TCP", 00:20:25.298 "adrfam": "IPv4", 00:20:25.298 "traddr": "10.0.0.2", 00:20:25.298 "trsvcid": "4420" 00:20:25.298 }, 00:20:25.298 "peer_address": { 00:20:25.298 "trtype": "TCP", 00:20:25.298 "adrfam": "IPv4", 00:20:25.298 "traddr": "10.0.0.1", 00:20:25.298 "trsvcid": "58380" 00:20:25.298 }, 00:20:25.298 "auth": { 00:20:25.298 "state": "completed", 00:20:25.298 "digest": "sha512", 00:20:25.298 "dhgroup": "ffdhe4096" 00:20:25.298 } 00:20:25.298 } 00:20:25.298 ]' 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.298 13:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.557 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:26.126 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.385 13:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.643 00:20:26.643 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:26.643 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.643 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:26.902 { 00:20:26.902 "cntlid": 127, 00:20:26.902 "qid": 0, 00:20:26.902 "state": "enabled", 00:20:26.902 "listen_address": { 00:20:26.902 "trtype": "TCP", 00:20:26.902 "adrfam": "IPv4", 00:20:26.902 "traddr": "10.0.0.2", 00:20:26.902 "trsvcid": "4420" 00:20:26.902 }, 00:20:26.902 "peer_address": { 00:20:26.902 "trtype": "TCP", 00:20:26.902 "adrfam": "IPv4", 00:20:26.902 "traddr": "10.0.0.1", 00:20:26.902 "trsvcid": "46438" 00:20:26.902 }, 00:20:26.902 "auth": { 00:20:26.902 "state": "completed", 00:20:26.902 "digest": "sha512", 00:20:26.902 "dhgroup": "ffdhe4096" 00:20:26.902 } 00:20:26.902 } 00:20:26.902 ]' 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.902 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.161 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.161 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.161 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.161 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.161 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.420 13:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:20:27.988 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.989 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:28.557 00:20:28.557 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:28.557 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:28.557 13:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:28.816 { 00:20:28.816 "cntlid": 129, 00:20:28.816 "qid": 0, 00:20:28.816 "state": "enabled", 00:20:28.816 "listen_address": { 00:20:28.816 "trtype": "TCP", 00:20:28.816 "adrfam": "IPv4", 00:20:28.816 "traddr": "10.0.0.2", 00:20:28.816 "trsvcid": "4420" 00:20:28.816 }, 00:20:28.816 "peer_address": { 00:20:28.816 "trtype": "TCP", 00:20:28.816 "adrfam": "IPv4", 00:20:28.816 "traddr": "10.0.0.1", 00:20:28.816 "trsvcid": "46472" 00:20:28.816 }, 00:20:28.816 "auth": { 00:20:28.816 "state": "completed", 00:20:28.816 "digest": "sha512", 00:20:28.816 "dhgroup": "ffdhe6144" 00:20:28.816 } 00:20:28.816 } 00:20:28.816 ]' 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.816 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.075 13:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.643 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:29.902 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:30.195 00:20:30.195 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:30.195 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.195 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:30.455 { 00:20:30.455 "cntlid": 131, 00:20:30.455 "qid": 0, 00:20:30.455 "state": "enabled", 00:20:30.455 "listen_address": { 00:20:30.455 "trtype": "TCP", 00:20:30.455 "adrfam": "IPv4", 00:20:30.455 "traddr": "10.0.0.2", 00:20:30.455 "trsvcid": "4420" 00:20:30.455 }, 00:20:30.455 "peer_address": { 00:20:30.455 "trtype": "TCP", 00:20:30.455 "adrfam": "IPv4", 00:20:30.455 "traddr": "10.0.0.1", 00:20:30.455 "trsvcid": "46488" 00:20:30.455 }, 00:20:30.455 "auth": { 00:20:30.455 "state": "completed", 00:20:30.455 "digest": "sha512", 00:20:30.455 "dhgroup": "ffdhe6144" 00:20:30.455 } 00:20:30.455 } 00:20:30.455 ]' 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:30.455 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.456 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.456 13:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.715 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.283 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:31.543 13:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:31.802 00:20:31.802 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:31.802 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:31.802 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:32.063 { 00:20:32.063 "cntlid": 133, 
00:20:32.063 "qid": 0, 00:20:32.063 "state": "enabled", 00:20:32.063 "listen_address": { 00:20:32.063 "trtype": "TCP", 00:20:32.063 "adrfam": "IPv4", 00:20:32.063 "traddr": "10.0.0.2", 00:20:32.063 "trsvcid": "4420" 00:20:32.063 }, 00:20:32.063 "peer_address": { 00:20:32.063 "trtype": "TCP", 00:20:32.063 "adrfam": "IPv4", 00:20:32.063 "traddr": "10.0.0.1", 00:20:32.063 "trsvcid": "46512" 00:20:32.063 }, 00:20:32.063 "auth": { 00:20:32.063 "state": "completed", 00:20:32.063 "digest": "sha512", 00:20:32.063 "dhgroup": "ffdhe6144" 00:20:32.063 } 00:20:32.063 } 00:20:32.063 ]' 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.063 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:32.322 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.322 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:32.322 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.322 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.322 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.581 13:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.148 13:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.715 00:20:33.715 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:33.715 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:33.715 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.715 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.715 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.715 13:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.715 13:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.974 13:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.974 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:33.974 { 00:20:33.974 "cntlid": 135, 00:20:33.974 "qid": 0, 00:20:33.974 "state": "enabled", 00:20:33.974 "listen_address": { 00:20:33.974 "trtype": "TCP", 00:20:33.974 "adrfam": "IPv4", 00:20:33.974 "traddr": "10.0.0.2", 00:20:33.974 "trsvcid": "4420" 00:20:33.974 }, 00:20:33.974 "peer_address": { 00:20:33.974 "trtype": "TCP", 00:20:33.974 "adrfam": "IPv4", 00:20:33.974 "traddr": "10.0.0.1", 00:20:33.974 "trsvcid": "46542" 00:20:33.974 }, 00:20:33.974 "auth": { 00:20:33.974 "state": "completed", 00:20:33.974 "digest": "sha512", 00:20:33.974 "dhgroup": "ffdhe6144" 00:20:33.974 } 00:20:33.974 } 00:20:33.974 ]' 00:20:33.974 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:33.974 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.974 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:33.974 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.974 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:33.975 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.975 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.975 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.234 13:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.800 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:35.059 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:35.627 00:20:35.627 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:35.627 13:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:35.627 13:58:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.627 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.627 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.627 13:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.627 13:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.627 13:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.627 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:35.627 { 00:20:35.627 "cntlid": 137, 00:20:35.627 "qid": 0, 00:20:35.627 "state": "enabled", 00:20:35.627 "listen_address": { 00:20:35.627 "trtype": "TCP", 00:20:35.627 "adrfam": "IPv4", 00:20:35.627 "traddr": "10.0.0.2", 00:20:35.627 "trsvcid": "4420" 00:20:35.627 }, 00:20:35.627 "peer_address": { 00:20:35.627 "trtype": "TCP", 00:20:35.627 "adrfam": "IPv4", 00:20:35.627 "traddr": "10.0.0.1", 00:20:35.627 "trsvcid": "46570" 00:20:35.627 }, 00:20:35.627 "auth": { 00:20:35.627 "state": "completed", 00:20:35.627 "digest": "sha512", 00:20:35.627 "dhgroup": "ffdhe8192" 00:20:35.627 } 00:20:35.627 } 00:20:35.627 ]' 00:20:35.627 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:35.887 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.887 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:35.887 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.887 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:35.887 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.887 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.887 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.151 13:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:36.739 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:37.330 00:20:37.330 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:37.330 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.330 13:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:37.604 { 00:20:37.604 "cntlid": 139, 00:20:37.604 "qid": 0, 00:20:37.604 "state": "enabled", 00:20:37.604 "listen_address": { 00:20:37.604 "trtype": "TCP", 00:20:37.604 "adrfam": "IPv4", 00:20:37.604 "traddr": "10.0.0.2", 00:20:37.604 "trsvcid": "4420" 00:20:37.604 }, 00:20:37.604 "peer_address": { 00:20:37.604 "trtype": "TCP", 00:20:37.604 "adrfam": "IPv4", 00:20:37.604 "traddr": "10.0.0.1", 00:20:37.604 "trsvcid": "47620" 00:20:37.604 }, 00:20:37.604 "auth": { 00:20:37.604 "state": "completed", 00:20:37.604 "digest": "sha512", 00:20:37.604 "dhgroup": "ffdhe8192" 00:20:37.604 } 00:20:37.604 } 00:20:37.604 ]' 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
jq -r '.[0].auth.digest' 00:20:37.604 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.605 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:37.605 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.605 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:37.866 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.866 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.866 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.866 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:01:OWZiYTQ2N2Q5NjAzOWM2Mjk5ZjdlZTQ5NTg3Yjg2MTlC+6MS: 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.433 13:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key2 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:38.692 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:39.259 00:20:39.259 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:39.259 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.259 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:39.518 { 00:20:39.518 "cntlid": 141, 00:20:39.518 "qid": 0, 00:20:39.518 "state": "enabled", 00:20:39.518 "listen_address": { 00:20:39.518 "trtype": "TCP", 00:20:39.518 "adrfam": "IPv4", 00:20:39.518 "traddr": "10.0.0.2", 00:20:39.518 "trsvcid": "4420" 00:20:39.518 }, 00:20:39.518 "peer_address": { 00:20:39.518 "trtype": "TCP", 00:20:39.518 "adrfam": "IPv4", 00:20:39.518 "traddr": "10.0.0.1", 00:20:39.518 "trsvcid": "47646" 00:20:39.518 }, 00:20:39.518 "auth": { 00:20:39.518 "state": "completed", 00:20:39.518 "digest": "sha512", 00:20:39.518 "dhgroup": "ffdhe8192" 00:20:39.518 } 00:20:39.518 } 00:20:39.518 ]' 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.518 13:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:39.518 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.518 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:39.518 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.518 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.518 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.779 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:02:YTI3ZjBkNGJmNzE2NDhkMzJhNjlmZTY4ZTJkMjA4Yzk4ZmI4MTVmZDc4OWQ5NjllOPAJwA==: 00:20:40.356 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.356 13:58:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:40.356 13:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.356 13:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.356 13:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.356 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:40.356 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.356 13:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key3 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.615 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.184 00:20:41.184 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:41.184 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:41.184 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.184 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:41.444 { 
00:20:41.444 "cntlid": 143, 00:20:41.444 "qid": 0, 00:20:41.444 "state": "enabled", 00:20:41.444 "listen_address": { 00:20:41.444 "trtype": "TCP", 00:20:41.444 "adrfam": "IPv4", 00:20:41.444 "traddr": "10.0.0.2", 00:20:41.444 "trsvcid": "4420" 00:20:41.444 }, 00:20:41.444 "peer_address": { 00:20:41.444 "trtype": "TCP", 00:20:41.444 "adrfam": "IPv4", 00:20:41.444 "traddr": "10.0.0.1", 00:20:41.444 "trsvcid": "47682" 00:20:41.444 }, 00:20:41.444 "auth": { 00:20:41.444 "state": "completed", 00:20:41.444 "digest": "sha512", 00:20:41.444 "dhgroup": "ffdhe8192" 00:20:41.444 } 00:20:41.444 } 00:20:41.444 ]' 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.444 13:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.703 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:03:NjY0YWYzYjM0MDdiZTE5ZjQ2ZTkzY2I5NzA3MGJhMDcyNzE4MDgyOWIxMTFjZTAyYzAxNDgyOTEwNGVmNTM3ZvOBCUE=: 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:42.271 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 
-- # connect_authenticate sha512 ffdhe8192 0 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key0 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:42.530 13:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:43.098 00:20:43.098 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:43.098 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.098 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:43.098 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.356 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.356 13:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.356 13:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.356 13:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.356 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:43.356 { 00:20:43.356 "cntlid": 145, 00:20:43.356 "qid": 0, 00:20:43.356 "state": "enabled", 00:20:43.356 "listen_address": { 00:20:43.356 "trtype": "TCP", 00:20:43.356 "adrfam": "IPv4", 00:20:43.356 "traddr": "10.0.0.2", 00:20:43.356 "trsvcid": "4420" 00:20:43.356 }, 00:20:43.356 "peer_address": { 00:20:43.356 "trtype": "TCP", 00:20:43.356 "adrfam": "IPv4", 00:20:43.356 "traddr": "10.0.0.1", 00:20:43.356 "trsvcid": "47706" 00:20:43.356 }, 00:20:43.356 "auth": { 00:20:43.356 "state": "completed", 00:20:43.356 "digest": "sha512", 00:20:43.357 "dhgroup": "ffdhe8192" 00:20:43.357 } 00:20:43.357 } 00:20:43.357 ]' 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.357 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.615 13:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid 0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-secret DHHC-1:00:MmQzODJiYTQyOGZkMjQ5NjI2Y2RiOGYxYWFjYTMxMGJkNWZkMTBhYjMwOGZmMDhkr2FRFg==: 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --dhchap-key key1 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:44.183 13:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:44.787 request: 00:20:44.787 { 00:20:44.787 "name": "nvme0", 00:20:44.787 "trtype": "tcp", 00:20:44.787 "traddr": "10.0.0.2", 00:20:44.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c", 00:20:44.787 "adrfam": "ipv4", 00:20:44.787 "trsvcid": "4420", 00:20:44.787 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:44.787 "dhchap_key": "key2", 00:20:44.787 "method": "bdev_nvme_attach_controller", 00:20:44.787 "req_id": 1 00:20:44.787 } 00:20:44.787 Got JSON-RPC error response 00:20:44.787 response: 00:20:44.787 { 00:20:44.787 "code": -32602, 00:20:44.787 "message": "Invalid parameters" 00:20:44.787 } 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68529 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 68529 ']' 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 68529 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68529 00:20:44.787 killing process with pid 68529 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68529' 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 68529 00:20:44.787 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 68529 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.051 13:58:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.051 rmmod nvme_tcp 00:20:45.051 rmmod nvme_fabrics 00:20:45.051 rmmod nvme_keyring 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 68506 ']' 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 68506 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 68506 ']' 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 68506 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.051 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68506 00:20:45.312 killing process with pid 68506 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68506' 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 68506 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 68506 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.312 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.571 13:58:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:45.571 13:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wuc /tmp/spdk.key-sha256.Aex /tmp/spdk.key-sha384.FSD /tmp/spdk.key-sha512.kL4 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:20:45.571 ************************************ 00:20:45.571 END TEST nvmf_auth_target 00:20:45.571 ************************************ 00:20:45.571 00:20:45.571 real 2m13.864s 00:20:45.571 user 5m8.459s 00:20:45.571 sys 0m28.912s 00:20:45.571 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:45.571 13:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
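The trace above completes the sha512 leg of the DH-HMAC-CHAP (dhchap) matrix in target/auth.sh: for each dhgroup (ffdhe6144, then ffdhe8192) and each of the four named keys, the host is allowed on the subsystem with nvmf_subsystem_add_host --dhchap-key, a controller is attached with bdev_nvme_attach_controller using the same key, the authenticated qpair is checked via nvmf_subsystem_get_qpairs (auth.state must be "completed" with the expected digest and dhgroup), and the controller and host entry are torn down again. The run ends with a negative case: attaching with key2 while only key1 is registered is expected to fail with JSON-RPC error -32602 ("Invalid parameters"), after which the host and target daemons are killed and the key files removed. A minimal sketch of one such round follows; it is condensed from the flow in the trace, assumes the target and the host-side bdev_nvme application are already running and that the named keys (key0 and so on) were loaded earlier in the script, and reuses the addresses and NQNs seen above.

  # One DH-HMAC-CHAP round, condensed from the trace above (a sketch, not the full auth.sh).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c

  # Pin the host side to one digest/dhgroup combination.
  "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Allow the host on the subsystem with a specific key, then attach with that key.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0

  # Verify the qpair authenticated with the expected parameters.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"

  # Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"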
00:20:45.571 13:58:43 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:45.571 13:58:43 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:45.571 13:58:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:20:45.571 13:58:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:45.571 13:58:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:45.571 ************************************ 00:20:45.571 START TEST nvmf_bdevio_no_huge 00:20:45.571 ************************************ 00:20:45.571 13:58:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:45.571 * Looking for test storage... 00:20:45.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.571 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.572 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:45.832 Cannot find device "nvmf_tgt_br" 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.832 Cannot find device "nvmf_tgt_br2" 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:45.832 13:58:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:45.832 Cannot find device "nvmf_tgt_br" 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:45.832 Cannot find device "nvmf_tgt_br2" 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:45.832 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:46.091 13:58:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:46.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:20:46.091 00:20:46.091 --- 10.0.0.2 ping statistics --- 00:20:46.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.091 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:46.091 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.091 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:20:46.091 00:20:46.091 --- 10.0.0.3 ping statistics --- 00:20:46.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.091 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:46.091 00:20:46.091 --- 10.0.0.1 ping statistics --- 00:20:46.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.091 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71394 00:20:46.091 13:58:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71394 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 71394 ']' 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.091 13:58:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.091 [2024-05-15 13:58:44.602670] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:46.091 [2024-05-15 13:58:44.602750] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:46.350 [2024-05-15 13:58:44.745529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.350 [2024-05-15 13:58:44.870497] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.350 [2024-05-15 13:58:44.870540] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.350 [2024-05-15 13:58:44.870549] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.350 [2024-05-15 13:58:44.870557] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.350 [2024-05-15 13:58:44.870564] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
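The nvmf_veth_init trace above builds the throwaway test network for this run: a dedicated network namespace for the target, veth pairs bridged back to the initiator side, and an iptables rule admitting NVMe/TCP traffic on port 4420. The "Cannot find device" and "Cannot open network namespace" messages are only the teardown of a previous topology and are expected on a clean host. A condensed sketch of the same topology, using the interface names and addresses from the trace (error handling and the nomaster teardown omitted):

    # Target lives in its own namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target leg
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target leg
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> initiator

With the namespace reachable, nvmf_tgt is launched inside it with --no-huge -s 1024 -m 0x78, which is the point of this test case: the target runs on cores 3-6 with 1024 MB of memory taken from normal pages rather than hugepages, as the EAL parameter line above confirms.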
00:20:46.350 [2024-05-15 13:58:44.870665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.350 [2024-05-15 13:58:44.870841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:46.350 [2024-05-15 13:58:44.871019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:46.350 [2024-05-15 13:58:44.871490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.918 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:46.918 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:20:46.918 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.918 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.918 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.177 [2024-05-15 13:58:45.510752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.177 Malloc0 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.177 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.178 [2024-05-15 13:58:45.554650] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:20:47.178 [2024-05-15 13:58:45.555030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.178 { 00:20:47.178 "params": { 00:20:47.178 "name": "Nvme$subsystem", 00:20:47.178 "trtype": "$TEST_TRANSPORT", 00:20:47.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.178 "adrfam": "ipv4", 00:20:47.178 "trsvcid": "$NVMF_PORT", 00:20:47.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.178 "hdgst": ${hdgst:-false}, 00:20:47.178 "ddgst": ${ddgst:-false} 00:20:47.178 }, 00:20:47.178 "method": "bdev_nvme_attach_controller" 00:20:47.178 } 00:20:47.178 EOF 00:20:47.178 )") 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:47.178 13:58:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:47.178 "params": { 00:20:47.178 "name": "Nvme1", 00:20:47.178 "trtype": "tcp", 00:20:47.178 "traddr": "10.0.0.2", 00:20:47.178 "adrfam": "ipv4", 00:20:47.178 "trsvcid": "4420", 00:20:47.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.178 "hdgst": false, 00:20:47.178 "ddgst": false 00:20:47.178 }, 00:20:47.178 "method": "bdev_nvme_attach_controller" 00:20:47.178 }' 00:20:47.178 [2024-05-15 13:58:45.607304] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
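At this point the target side is fully configured: a 64 MiB, 512-byte-block Malloc0 bdev is exported under nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.2:4420 (the [listen_]address.transport warning is only a deprecation notice). The initiator side, bdevio, is then started with --no-huge -s 1024 as well and reads its configuration from --json /dev/fd/62, which gen_nvmf_target_json fills with the bdev_nvme_attach_controller entry printed above. Reproduced from that printf output, the entry bdevio consumes is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

This creates the Nvme1n1 bdev that the "Suite: bdevio tests on: Nvme1n1" run below exercises; the COMPARE FAILURE and ABORTED - FAILED FUSED completions logged there are NOTICE-level output from the fused compare-and-write cases, and the run summary still reports all 23 tests passed.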
00:20:47.178 [2024-05-15 13:58:45.607744] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71430 ] 00:20:47.436 [2024-05-15 13:58:45.745920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:47.436 [2024-05-15 13:58:45.871429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.436 [2024-05-15 13:58:45.871640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.436 [2024-05-15 13:58:45.871640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.696 I/O targets: 00:20:47.696 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:47.696 00:20:47.696 00:20:47.696 CUnit - A unit testing framework for C - Version 2.1-3 00:20:47.696 http://cunit.sourceforge.net/ 00:20:47.696 00:20:47.696 00:20:47.696 Suite: bdevio tests on: Nvme1n1 00:20:47.696 Test: blockdev write read block ...passed 00:20:47.696 Test: blockdev write zeroes read block ...passed 00:20:47.696 Test: blockdev write zeroes read no split ...passed 00:20:47.696 Test: blockdev write zeroes read split ...passed 00:20:47.696 Test: blockdev write zeroes read split partial ...passed 00:20:47.696 Test: blockdev reset ...[2024-05-15 13:58:46.083130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.696 [2024-05-15 13:58:46.083229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229d310 (9): Bad file descriptor 00:20:47.696 [2024-05-15 13:58:46.103555] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:47.696 passed 00:20:47.696 Test: blockdev write read 8 blocks ...passed 00:20:47.696 Test: blockdev write read size > 128k ...passed 00:20:47.696 Test: blockdev write read invalid size ...passed 00:20:47.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:47.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:47.696 Test: blockdev write read max offset ...passed 00:20:47.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:47.696 Test: blockdev writev readv 8 blocks ...passed 00:20:47.696 Test: blockdev writev readv 30 x 1block ...passed 00:20:47.696 Test: blockdev writev readv block ...passed 00:20:47.696 Test: blockdev writev readv size > 128k ...passed 00:20:47.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:47.696 Test: blockdev comparev and writev ...[2024-05-15 13:58:46.111035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.111082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.111109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.111124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.111487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.111516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.111541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.111556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.111922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.111949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.111973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.111989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.112396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.112431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.112455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.696 [2024-05-15 13:58:46.112471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:47.696 passed 00:20:47.696 Test: blockdev nvme passthru rw ...passed 00:20:47.696 Test: blockdev nvme passthru vendor specific ...[2024-05-15 13:58:46.113468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.696 [2024-05-15 13:58:46.113515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.113627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.696 [2024-05-15 13:58:46.113650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.113762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.696 [2024-05-15 13:58:46.113788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:47.696 [2024-05-15 13:58:46.113892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.696 [2024-05-15 13:58:46.113914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:47.696 passed 00:20:47.696 Test: blockdev nvme admin passthru ...passed 00:20:47.696 Test: blockdev copy ...passed 00:20:47.696 00:20:47.696 Run Summary: Type Total Ran Passed Failed Inactive 00:20:47.696 suites 1 1 n/a 0 0 00:20:47.696 tests 23 23 23 0 0 00:20:47.696 asserts 152 152 152 0 
n/a 00:20:47.696 00:20:47.696 Elapsed time = 0.182 seconds 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:48.263 rmmod nvme_tcp 00:20:48.263 rmmod nvme_fabrics 00:20:48.263 rmmod nvme_keyring 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71394 ']' 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71394 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 71394 ']' 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 71394 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71394 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71394' 00:20:48.263 killing process with pid 71394 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 71394 00:20:48.263 [2024-05-15 13:58:46.688660] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:48.263 13:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 71394 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:48.830 00:20:48.830 real 0m3.216s 00:20:48.830 user 0m9.940s 00:20:48.830 sys 0m1.414s 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:48.830 13:58:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:48.830 ************************************ 00:20:48.830 END TEST nvmf_bdevio_no_huge 00:20:48.830 ************************************ 00:20:48.830 13:58:47 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:48.830 13:58:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:48.830 13:58:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:48.830 13:58:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:48.830 ************************************ 00:20:48.830 START TEST nvmf_tls 00:20:48.830 ************************************ 00:20:48.830 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:48.830 * Looking for test storage... 00:20:48.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:48.830 13:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 
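Sourcing nvmf/common.sh for the TLS test also mints a fresh initiator identity: nvme gen-hostnqn returns the UUID-based host NQN shown above, the same UUID is kept as the host ID, and both are stored in NVME_HOST so they can be passed explicitly on nvme connect. A minimal sketch of that pattern; the exact way common.sh derives the host ID from the NQN is not visible in the trace, so the substring expansion below is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: host ID is the UUID tail of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # later tests can pass these straight through, e.g.:
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subnqn>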
00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:49.089 Cannot find device "nvmf_tgt_br" 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:49.089 Cannot find device "nvmf_tgt_br2" 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:49.089 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link 
set nvmf_tgt_br down 00:20:49.089 Cannot find device "nvmf_tgt_br" 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:49.090 Cannot find device "nvmf_tgt_br2" 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:49.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:49.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:49.090 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:49.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:49.348 00:20:49.348 --- 10.0.0.2 ping statistics --- 00:20:49.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.348 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:49.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:49.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:20:49.348 00:20:49.348 --- 10.0.0.3 ping statistics --- 00:20:49.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.348 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:49.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:49.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:49.348 00:20:49.348 --- 10.0.0.1 ping statistics --- 00:20:49.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.348 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.348 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=71609 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 71609 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 71609 ']' 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:49.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
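Unlike the bdevio run, the TLS target is started with nvmfappstart -m 0x2 --wait-for-rpc: the application comes up on a single core with only its RPC server active, so the ssl socket implementation can be selected and tuned before any listener socket exists; framework_start_init, issued further down, then completes startup. A sketch of that ordering using the same RPCs the script exercises (rpc.py stands for scripts/rpc.py, waitforlisten and most of the jq verification omitted):

    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

    rpc.py sock_set_default_impl -i ssl                        # make ssl the default sock impl
    rpc.py sock_impl_set_options -i ssl --tls-version 13       # the script also tries 7, then settles on 13
    rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
    rpc.py sock_impl_set_options -i ssl --enable-ktls          # toggled on and back off to verify the knob
    rpc.py sock_impl_set_options -i ssl --disable-ktls
    rpc.py framework_start_init                                # now finish bringing the target up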
00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:49.349 13:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.349 [2024-05-15 13:58:47.893070] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:20:49.349 [2024-05-15 13:58:47.893139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.607 [2024-05-15 13:58:48.035731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.607 [2024-05-15 13:58:48.136025] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.607 [2024-05-15 13:58:48.136068] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.607 [2024-05-15 13:58:48.136077] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.607 [2024-05-15 13:58:48.136085] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.607 [2024-05-15 13:58:48.136092] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.607 [2024-05-15 13:58:48.136121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:50.544 13:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:50.544 true 00:20:50.544 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:50.544 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:50.803 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:50.803 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:50.803 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:51.062 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.062 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:51.062 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:51.062 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:51.062 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:51.320 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.320 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # 
jq -r .tls_version 00:20:51.579 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:51.579 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:51.579 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.579 13:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:51.838 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:51.838 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:51.838 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:52.097 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.097 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:52.097 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:52.097 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:52.097 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:52.355 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:52.356 13:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.c9xUykpqdE 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:52.615 
13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.H4zdTSHLf6 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:52.615 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.c9xUykpqdE 00:20:52.874 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.H4zdTSHLf6 00:20:52.874 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:52.874 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:53.133 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.c9xUykpqdE 00:20:53.133 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.c9xUykpqdE 00:20:53.133 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.391 [2024-05-15 13:58:51.843144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.391 13:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.650 13:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.909 [2024-05-15 13:58:52.262491] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:53.909 [2024-05-15 13:58:52.262588] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.909 [2024-05-15 13:58:52.262798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.909 13:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.167 malloc0 00:20:54.167 13:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.167 13:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.c9xUykpqdE 00:20:54.425 [2024-05-15 13:58:52.830957] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:54.425 13:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.c9xUykpqdE 00:21:06.674 Initializing NVMe Controllers 00:21:06.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:06.674 Initialization complete. Launching workers. 
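Two interchange-format PSKs (NVMeTLSkey-1:01: prefix with a base64 payload) are generated above: the first is written to /tmp/tmp.c9xUykpqdE and registered on the target for host1, the second to /tmp/tmp.H4zdTSHLf6 and deliberately left unregistered so a later case can show that a mismatched key is rejected. setup_nvmf_tgt then builds the TLS-enabled subsystem. Condensed from the trace, with the key material elided and rpc.py standing for scripts/rpc.py:

    echo -n "NVMeTLSkey-1:01:...:" > "$key_path"       # configured PSK in interchange format
    chmod 0600 "$key_path"                             # key files are made owner-only

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                  # -k enables TLS on this listener (experimental per the NOTICE above)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"                              # the PSK the target will expect from host1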
00:21:06.674 ======================================================== 00:21:06.674 Latency(us) 00:21:06.674 Device Information : IOPS MiB/s Average min max 00:21:06.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14204.09 55.48 4506.33 940.63 15600.77 00:21:06.674 ======================================================== 00:21:06.674 Total : 14204.09 55.48 4506.33 940.63 15600.77 00:21:06.674 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c9xUykpqdE 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.c9xUykpqdE' 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71835 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71835 /var/tmp/bdevperf.sock 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 71835 ']' 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.674 [2024-05-15 13:59:03.069323] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
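The first data-path check is spdk_nvme_perf, run (as the trace does) inside the target namespace with -S ssl so the initiator side also uses the ssl sock implementation, and with --psk-path pointing at the key registered for host1. The 10-second, queue-depth-64, 4 KiB randrw run above completes at roughly 14.2k IOPS with about 4.5 ms average latency; for this test that is a functional proof that the TLS handshake and I/O path work, not a performance figure. The invocation, condensed from the trace:

    ip netns exec nvmf_tgt_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path /tmp/tmp.c9xUykpqdE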
00:21:06.674 [2024-05-15 13:59:03.069404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71835 ] 00:21:06.674 [2024-05-15 13:59:03.209080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.674 [2024-05-15 13:59:03.306674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:06.674 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.c9xUykpqdE 00:21:06.674 [2024-05-15 13:59:04.079091] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.674 [2024-05-15 13:59:04.079195] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.674 TLSTESTn1 00:21:06.674 13:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.674 Running I/O for 10 seconds... 00:21:16.653 00:21:16.653 Latency(us) 00:21:16.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.653 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.653 Verification LBA range: start 0x0 length 0x2000 00:21:16.653 TLSTESTn1 : 10.01 5585.60 21.82 0.00 0.00 22878.95 4579.62 30530.83 00:21:16.653 =================================================================================================================== 00:21:16.653 Total : 5585.60 21.82 0.00 0.00 22878.95 4579.62 30530.83 00:21:16.653 0 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 71835 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 71835 ']' 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 71835 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71835 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71835' 00:21:16.653 killing process with pid 71835 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 71835 00:21:16.653 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.653 00:21:16.653 Latency(us) 00:21:16.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.653 
=================================================================================================================== 00:21:16.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.653 [2024-05-15 13:59:14.296762] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 71835 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4zdTSHLf6 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4zdTSHLf6 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4zdTSHLf6 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H4zdTSHLf6' 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71963 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71963 /var/tmp/bdevperf.sock 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 71963 ']' 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:16.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:16.653 13:59:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.653 [2024-05-15 13:59:14.574143] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
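run_bdevperf, used for the matching-key case above and for the wrong-key case that starts here, drives everything through bdevperf's private RPC socket rather than a JSON config: bdevperf is started idle with -z, a TLS NVMe bdev is attached with bdev_nvme_attach_controller --psk, and bdevperf.py perform_tests kicks off the 10-second verify workload. A condensed sketch of that sequence, with binary and script paths shortened:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # wait for /var/tmp/bdevperf.sock to appear, then:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.c9xUykpqdE                      # must be the key registered for host1
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests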
00:21:16.653 [2024-05-15 13:59:14.574228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71963 ] 00:21:16.653 [2024-05-15 13:59:14.720734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.653 [2024-05-15 13:59:14.828128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.912 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:16.912 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:16.912 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H4zdTSHLf6 00:21:17.170 [2024-05-15 13:59:15.630980] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.170 [2024-05-15 13:59:15.631097] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:17.171 [2024-05-15 13:59:15.641933] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:17.171 [2024-05-15 13:59:15.642295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c990 (107): Transport endpoint is not connected 00:21:17.171 [2024-05-15 13:59:15.643281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c990 (9): Bad file descriptor 00:21:17.171 [2024-05-15 13:59:15.644277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:17.171 [2024-05-15 13:59:15.644300] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:17.171 [2024-05-15 13:59:15.644310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:17.171 request: 00:21:17.171 { 00:21:17.171 "name": "TLSTEST", 00:21:17.171 "trtype": "tcp", 00:21:17.171 "traddr": "10.0.0.2", 00:21:17.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.171 "adrfam": "ipv4", 00:21:17.171 "trsvcid": "4420", 00:21:17.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.171 "psk": "/tmp/tmp.H4zdTSHLf6", 00:21:17.171 "method": "bdev_nvme_attach_controller", 00:21:17.171 "req_id": 1 00:21:17.171 } 00:21:17.171 Got JSON-RPC error response 00:21:17.171 response: 00:21:17.171 { 00:21:17.171 "code": -32602, 00:21:17.171 "message": "Invalid parameters" 00:21:17.171 } 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 71963 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 71963 ']' 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 71963 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71963 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:17.171 killing process with pid 71963 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71963' 00:21:17.171 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.171 00:21:17.171 Latency(us) 00:21:17.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.171 =================================================================================================================== 00:21:17.171 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 71963 00:21:17.171 [2024-05-15 13:59:15.711232] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:17.171 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 71963 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.c9xUykpqdE 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.c9xUykpqdE 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.c9xUykpqdE 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.c9xUykpqdE' 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71985 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71985 /var/tmp/bdevperf.sock 00:21:17.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 71985 ']' 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:17.429 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.688 [2024-05-15 13:59:15.995524] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:21:17.688 [2024-05-15 13:59:15.996611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71985 ] 00:21:17.688 [2024-05-15 13:59:16.149334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.947 [2024-05-15 13:59:16.274765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.513 13:59:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.513 13:59:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:18.513 13:59:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.c9xUykpqdE 00:21:18.513 [2024-05-15 13:59:17.038191] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.513 [2024-05-15 13:59:17.038314] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:18.513 [2024-05-15 13:59:17.049655] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:18.513 [2024-05-15 13:59:17.049703] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:18.513 [2024-05-15 13:59:17.049773] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:18.513 [2024-05-15 13:59:17.050585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535990 (107): Transport endpoint is not connected 00:21:18.513 [2024-05-15 13:59:17.051570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535990 (9): Bad file descriptor 00:21:18.513 [2024-05-15 13:59:17.052567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:18.513 [2024-05-15 13:59:17.052589] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:18.513 [2024-05-15 13:59:17.052598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:18.513 request: 00:21:18.513 { 00:21:18.513 "name": "TLSTEST", 00:21:18.513 "trtype": "tcp", 00:21:18.513 "traddr": "10.0.0.2", 00:21:18.513 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:18.513 "adrfam": "ipv4", 00:21:18.513 "trsvcid": "4420", 00:21:18.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.513 "psk": "/tmp/tmp.c9xUykpqdE", 00:21:18.513 "method": "bdev_nvme_attach_controller", 00:21:18.513 "req_id": 1 00:21:18.513 } 00:21:18.513 Got JSON-RPC error response 00:21:18.513 response: 00:21:18.513 { 00:21:18.513 "code": -32602, 00:21:18.513 "message": "Invalid parameters" 00:21:18.513 } 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 71985 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 71985 ']' 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 71985 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71985 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71985' 00:21:18.770 killing process with pid 71985 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 71985 00:21:18.770 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.770 00:21:18.770 Latency(us) 00:21:18.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.770 =================================================================================================================== 00:21:18.770 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.770 [2024-05-15 13:59:17.114785] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.770 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 71985 00:21:19.026 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.c9xUykpqdE 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.c9xUykpqdE 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.c9xUykpqdE 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.c9xUykpqdE' 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72018 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72018 /var/tmp/bdevperf.sock 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72018 ']' 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:19.027 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.027 [2024-05-15 13:59:17.387918] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:21:19.027 [2024-05-15 13:59:17.388251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72018 ] 00:21:19.027 [2024-05-15 13:59:17.529167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.301 [2024-05-15 13:59:17.637676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.890 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:19.890 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:19.890 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.c9xUykpqdE 00:21:20.149 [2024-05-15 13:59:18.484945] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.149 [2024-05-15 13:59:18.485100] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.149 [2024-05-15 13:59:18.489916] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:20.149 [2024-05-15 13:59:18.489953] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:20.149 [2024-05-15 13:59:18.490003] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.149 [2024-05-15 13:59:18.490646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387990 (107): Transport endpoint is not connected 00:21:20.149 [2024-05-15 13:59:18.491625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387990 (9): Bad file descriptor 00:21:20.149 [2024-05-15 13:59:18.492622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:20.149 [2024-05-15 13:59:18.492652] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:20.149 [2024-05-15 13:59:18.492667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:20.149 request: 00:21:20.149 { 00:21:20.149 "name": "TLSTEST", 00:21:20.149 "trtype": "tcp", 00:21:20.149 "traddr": "10.0.0.2", 00:21:20.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.149 "adrfam": "ipv4", 00:21:20.149 "trsvcid": "4420", 00:21:20.149 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:20.149 "psk": "/tmp/tmp.c9xUykpqdE", 00:21:20.149 "method": "bdev_nvme_attach_controller", 00:21:20.149 "req_id": 1 00:21:20.149 } 00:21:20.149 Got JSON-RPC error response 00:21:20.149 response: 00:21:20.149 { 00:21:20.149 "code": -32602, 00:21:20.149 "message": "Invalid parameters" 00:21:20.149 } 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72018 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72018 ']' 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72018 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72018 00:21:20.149 killing process with pid 72018 00:21:20.149 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.149 00:21:20.149 Latency(us) 00:21:20.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.149 =================================================================================================================== 00:21:20.149 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72018' 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72018 00:21:20.149 [2024-05-15 13:59:18.534137] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.149 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72018 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.406 
13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72040 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72040 /var/tmp/bdevperf.sock 00:21:20.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72040 ']' 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.406 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:20.407 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.407 [2024-05-15 13:59:18.813884] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:21:20.407 [2024-05-15 13:59:18.813966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72040 ] 00:21:20.407 [2024-05-15 13:59:18.957190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.665 [2024-05-15 13:59:19.062785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.232 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:21.232 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:21.232 13:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:21.574 [2024-05-15 13:59:19.885354] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:21.574 [2024-05-15 13:59:19.887952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a76c80 (9): Bad file descriptor 00:21:21.574 [2024-05-15 13:59:19.888944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:21.574 [2024-05-15 13:59:19.889105] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:21.574 [2024-05-15 13:59:19.889218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:21.574 request: 00:21:21.574 { 00:21:21.574 "name": "TLSTEST", 00:21:21.574 "trtype": "tcp", 00:21:21.574 "traddr": "10.0.0.2", 00:21:21.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.574 "adrfam": "ipv4", 00:21:21.574 "trsvcid": "4420", 00:21:21.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.574 "method": "bdev_nvme_attach_controller", 00:21:21.574 "req_id": 1 00:21:21.574 } 00:21:21.574 Got JSON-RPC error response 00:21:21.574 response: 00:21:21.574 { 00:21:21.574 "code": -32602, 00:21:21.574 "message": "Invalid parameters" 00:21:21.574 } 00:21:21.574 13:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72040 00:21:21.574 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72040 ']' 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72040 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72040 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72040' 00:21:21.575 killing process with pid 72040 00:21:21.575 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.575 00:21:21.575 Latency(us) 00:21:21.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.575 =================================================================================================================== 00:21:21.575 
Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72040 00:21:21.575 13:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72040 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 71609 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 71609 ']' 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 71609 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71609 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:21.835 killing process with pid 71609 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71609' 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 71609 00:21:21.835 [2024-05-15 13:59:20.207322] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:21.835 [2024-05-15 13:59:20.207362] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.835 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 71609 00:21:22.093 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Xz9RBDbITt 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Xz9RBDbITt 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls 
-- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72078 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72078 00:21:22.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72078 ']' 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.094 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.094 [2024-05-15 13:59:20.562213] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:22.094 [2024-05-15 13:59:20.562631] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.353 [2024-05-15 13:59:20.715767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.353 [2024-05-15 13:59:20.861911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.353 [2024-05-15 13:59:20.861992] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.353 [2024-05-15 13:59:20.862004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.353 [2024-05-15 13:59:20.862014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.353 [2024-05-15 13:59:20.862022] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
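The target/tls.sh trace above builds the interchange-format key (key_long=NVMeTLSkey-1:02:...wWXNJw==:) from the raw hex secret via an inline Python snippet (format_interchange_psk with digest 2). A minimal standalone sketch of that formatting step follows; it assumes the trailing base64 bytes are a little-endian CRC-32 of the secret and that digest 2 maps to the ":02:" hash field, inferences from the captured key rather than from the SPDK sources.

import base64
import struct
import zlib

def format_interchange_psk(secret: bytes, hash_id: int) -> str:
    # Interchange format observed above: "NVMeTLSkey-1:<hh>:<base64(secret || CRC32(secret))>:"
    # CRC byte order and the digest-to-field mapping are assumptions for illustration.
    crc = struct.pack("<I", zlib.crc32(secret) & 0xFFFFFFFF)
    return "NVMeTLSkey-1:{:02d}:{}:".format(hash_id, base64.b64encode(secret + crc).decode())

# The secret passed by the test is the ASCII hex string itself; if the assumptions
# above hold, this should reproduce the key_long value captured in the trace.
print(format_interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))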
00:21:22.353 [2024-05-15 13:59:20.862064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.921 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:22.921 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:22.921 13:59:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.921 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.921 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.180 13:59:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.180 13:59:21 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Xz9RBDbITt 00:21:23.180 13:59:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Xz9RBDbITt 00:21:23.180 13:59:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.180 [2024-05-15 13:59:21.675417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.180 13:59:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:23.438 13:59:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:23.698 [2024-05-15 13:59:22.111040] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:23.698 [2024-05-15 13:59:22.111174] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.698 [2024-05-15 13:59:22.111413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.698 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:23.957 malloc0 00:21:23.957 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt 00:21:24.216 [2024-05-15 13:59:22.730006] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xz9RBDbITt 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Xz9RBDbITt' 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72127 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72127 /var/tmp/bdevperf.sock 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72127 ']' 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:24.216 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.474 [2024-05-15 13:59:22.796821] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:24.474 [2024-05-15 13:59:22.796901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72127 ] 00:21:24.474 [2024-05-15 13:59:22.941107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.733 [2024-05-15 13:59:23.046323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.302 13:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:25.302 13:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:25.302 13:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt 00:21:25.560 [2024-05-15 13:59:23.892204] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.560 [2024-05-15 13:59:23.892600] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:25.560 TLSTESTn1 00:21:25.560 13:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:25.560 Running I/O for 10 seconds... 
00:21:35.560 00:21:35.560 Latency(us) 00:21:35.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.560 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:35.560 Verification LBA range: start 0x0 length 0x2000 00:21:35.560 TLSTESTn1 : 10.01 5194.73 20.29 0.00 0.00 24601.54 5027.06 36215.88 00:21:35.560 =================================================================================================================== 00:21:35.560 Total : 5194.73 20.29 0.00 0.00 24601.54 5027.06 36215.88 00:21:35.560 0 00:21:35.560 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.560 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72127 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72127 ']' 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72127 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72127 00:21:35.818 killing process with pid 72127 00:21:35.818 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.818 00:21:35.818 Latency(us) 00:21:35.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.818 =================================================================================================================== 00:21:35.818 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72127' 00:21:35.818 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72127 00:21:35.819 [2024-05-15 13:59:34.157190] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.819 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72127 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Xz9RBDbITt 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xz9RBDbITt 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xz9RBDbITt 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xz9RBDbITt 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:36.077 
13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Xz9RBDbITt' 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72261 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72261 /var/tmp/bdevperf.sock 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72261 ']' 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:36.077 13:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.077 [2024-05-15 13:59:34.427485] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:36.077 [2024-05-15 13:59:34.427567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72261 ] 00:21:36.077 [2024-05-15 13:59:34.569265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.337 [2024-05-15 13:59:34.673588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.906 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:36.906 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:36.906 13:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt 00:21:37.165 [2024-05-15 13:59:35.490592] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.165 [2024-05-15 13:59:35.490671] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:37.165 [2024-05-15 13:59:35.490681] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Xz9RBDbITt 00:21:37.165 request: 00:21:37.165 { 00:21:37.165 "name": "TLSTEST", 00:21:37.165 "trtype": "tcp", 00:21:37.165 "traddr": "10.0.0.2", 00:21:37.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.165 "adrfam": "ipv4", 00:21:37.165 "trsvcid": "4420", 00:21:37.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.165 "psk": "/tmp/tmp.Xz9RBDbITt", 00:21:37.165 "method": "bdev_nvme_attach_controller", 00:21:37.165 "req_id": 1 
00:21:37.165 } 00:21:37.165 Got JSON-RPC error response 00:21:37.165 response: 00:21:37.165 { 00:21:37.165 "code": -1, 00:21:37.165 "message": "Operation not permitted" 00:21:37.165 } 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72261 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72261 ']' 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72261 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72261 00:21:37.165 killing process with pid 72261 00:21:37.165 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.165 00:21:37.165 Latency(us) 00:21:37.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.165 =================================================================================================================== 00:21:37.165 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72261' 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72261 00:21:37.165 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72261 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 72078 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72078 ']' 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72078 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:37.423 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:37.424 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72078 00:21:37.424 killing process with pid 72078 00:21:37.424 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:37.424 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:37.424 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72078' 00:21:37.424 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72078 00:21:37.424 [2024-05-15 13:59:35.795403] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:37.424 [2024-05-15 13:59:35.795445] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
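The failed attach above (pid 72261) follows directly from the earlier chmod 0666 on /tmp/tmp.Xz9RBDbITt: bdev_nvme_load_psk reports "Incorrect permissions for PSK file" and the RPC surfaces "Operation not permitted". A rough sketch of that kind of permission gate, purely for illustration (the real check lives in the SPDK sources):

import os
import stat

def psk_file_permissions_ok(path: str) -> bool:
    # Mirrors the 0600-vs-0666 behaviour seen in the log: accept only key files
    # with no group/other access bits set (assumed check, not the SPDK code).
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

With the 0600 file created earlier this returns True; after the chmod 0666 it returns False, matching the error path recorded above.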
00:21:37.424 13:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72078 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72294 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72294 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72294 ']' 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:37.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:37.682 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.682 [2024-05-15 13:59:36.083361] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:37.682 [2024-05-15 13:59:36.083432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.682 [2024-05-15 13:59:36.227330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.941 [2024-05-15 13:59:36.326662] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.941 [2024-05-15 13:59:36.326711] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.941 [2024-05-15 13:59:36.326721] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.941 [2024-05-15 13:59:36.326745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.941 [2024-05-15 13:59:36.326767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:37.941 [2024-05-15 13:59:36.326795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Xz9RBDbITt 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Xz9RBDbITt 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Xz9RBDbITt 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Xz9RBDbITt 00:21:38.508 13:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:38.767 [2024-05-15 13:59:37.190547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.767 13:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.025 13:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:39.282 [2024-05-15 13:59:37.606071] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:39.282 [2024-05-15 13:59:37.606204] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.282 [2024-05-15 13:59:37.606430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.282 13:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:39.282 malloc0 00:21:39.540 13:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:39.540 13:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt 00:21:39.799 [2024-05-15 13:59:38.217884] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:39.799 [2024-05-15 13:59:38.217956] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
00:21:39.799 [2024-05-15 13:59:38.217993] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:39.799 request: 00:21:39.799 { 00:21:39.799 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.799 "host": "nqn.2016-06.io.spdk:host1", 00:21:39.799 "psk": "/tmp/tmp.Xz9RBDbITt", 00:21:39.799 "method": "nvmf_subsystem_add_host", 00:21:39.799 "req_id": 1 00:21:39.799 } 00:21:39.799 Got JSON-RPC error response 00:21:39.799 response: 00:21:39.799 { 00:21:39.799 "code": -32603, 00:21:39.799 "message": "Internal error" 00:21:39.799 } 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 72294 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72294 ']' 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72294 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72294 00:21:39.799 killing process with pid 72294 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72294' 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72294 00:21:39.799 [2024-05-15 13:59:38.275207] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:39.799 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72294 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Xz9RBDbITt 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72351 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72351 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72351 ']' 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:40.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:40.366 13:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.366 [2024-05-15 13:59:38.709682] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:40.366 [2024-05-15 13:59:38.709783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.366 [2024-05-15 13:59:38.852582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.625 [2024-05-15 13:59:38.990875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.625 [2024-05-15 13:59:38.990944] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.625 [2024-05-15 13:59:38.990954] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.625 [2024-05-15 13:59:38.990962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.625 [2024-05-15 13:59:38.990970] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.625 [2024-05-15 13:59:38.991007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Xz9RBDbITt 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Xz9RBDbITt 00:21:41.195 13:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:41.460 [2024-05-15 13:59:39.875247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.460 13:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:41.732 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:41.732 [2024-05-15 13:59:40.266814] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:41.732 [2024-05-15 13:59:40.266922] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.732 [2024-05-15 13:59:40.267148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.000 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:42.000 malloc0 00:21:42.000 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:42.257 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt 00:21:42.514 [2024-05-15 13:59:40.877555] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=72406 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 72406 /var/tmp/bdevperf.sock 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72406 ']' 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:42.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:42.514 13:59:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.514 [2024-05-15 13:59:40.945930] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
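Note: the setup_nvmf_tgt helper exercised above reduces to the RPC sequence below, condensed from the trace (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the key path and NQNs are the ones used in this run).

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt

The PSK file must be owner-readable only (chmod 0600); otherwise nvmf_subsystem_add_host fails with "Incorrect permissions for PSK file", as seen earlier in this log before the chmod at target/tls.sh@181.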
00:21:42.514 [2024-05-15 13:59:40.946009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72406 ] 00:21:42.772 [2024-05-15 13:59:41.089479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.772 [2024-05-15 13:59:41.195201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.336 13:59:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:43.336 13:59:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:43.336 13:59:41 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt 00:21:43.593 [2024-05-15 13:59:41.977441] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.593 [2024-05-15 13:59:41.979218] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:43.593 TLSTESTn1 00:21:43.593 13:59:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:43.850 13:59:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:43.850 "subsystems": [ 00:21:43.850 { 00:21:43.850 "subsystem": "keyring", 00:21:43.850 "config": [] 00:21:43.850 }, 00:21:43.850 { 00:21:43.850 "subsystem": "iobuf", 00:21:43.850 "config": [ 00:21:43.850 { 00:21:43.850 "method": "iobuf_set_options", 00:21:43.850 "params": { 00:21:43.850 "small_pool_count": 8192, 00:21:43.850 "large_pool_count": 1024, 00:21:43.850 "small_bufsize": 8192, 00:21:43.850 "large_bufsize": 135168 00:21:43.851 } 00:21:43.851 } 00:21:43.851 ] 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "subsystem": "sock", 00:21:43.851 "config": [ 00:21:43.851 { 00:21:43.851 "method": "sock_impl_set_options", 00:21:43.851 "params": { 00:21:43.851 "impl_name": "uring", 00:21:43.851 "recv_buf_size": 2097152, 00:21:43.851 "send_buf_size": 2097152, 00:21:43.851 "enable_recv_pipe": true, 00:21:43.851 "enable_quickack": false, 00:21:43.851 "enable_placement_id": 0, 00:21:43.851 "enable_zerocopy_send_server": false, 00:21:43.851 "enable_zerocopy_send_client": false, 00:21:43.851 "zerocopy_threshold": 0, 00:21:43.851 "tls_version": 0, 00:21:43.851 "enable_ktls": false 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "sock_impl_set_options", 00:21:43.851 "params": { 00:21:43.851 "impl_name": "posix", 00:21:43.851 "recv_buf_size": 2097152, 00:21:43.851 "send_buf_size": 2097152, 00:21:43.851 "enable_recv_pipe": true, 00:21:43.851 "enable_quickack": false, 00:21:43.851 "enable_placement_id": 0, 00:21:43.851 "enable_zerocopy_send_server": true, 00:21:43.851 "enable_zerocopy_send_client": false, 00:21:43.851 "zerocopy_threshold": 0, 00:21:43.851 "tls_version": 0, 00:21:43.851 "enable_ktls": false 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "sock_impl_set_options", 00:21:43.851 "params": { 00:21:43.851 "impl_name": "ssl", 00:21:43.851 "recv_buf_size": 4096, 00:21:43.851 "send_buf_size": 4096, 00:21:43.851 "enable_recv_pipe": true, 00:21:43.851 "enable_quickack": false, 00:21:43.851 "enable_placement_id": 0, 00:21:43.851 "enable_zerocopy_send_server": 
true, 00:21:43.851 "enable_zerocopy_send_client": false, 00:21:43.851 "zerocopy_threshold": 0, 00:21:43.851 "tls_version": 0, 00:21:43.851 "enable_ktls": false 00:21:43.851 } 00:21:43.851 } 00:21:43.851 ] 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "subsystem": "vmd", 00:21:43.851 "config": [] 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "subsystem": "accel", 00:21:43.851 "config": [ 00:21:43.851 { 00:21:43.851 "method": "accel_set_options", 00:21:43.851 "params": { 00:21:43.851 "small_cache_size": 128, 00:21:43.851 "large_cache_size": 16, 00:21:43.851 "task_count": 2048, 00:21:43.851 "sequence_count": 2048, 00:21:43.851 "buf_count": 2048 00:21:43.851 } 00:21:43.851 } 00:21:43.851 ] 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "subsystem": "bdev", 00:21:43.851 "config": [ 00:21:43.851 { 00:21:43.851 "method": "bdev_set_options", 00:21:43.851 "params": { 00:21:43.851 "bdev_io_pool_size": 65535, 00:21:43.851 "bdev_io_cache_size": 256, 00:21:43.851 "bdev_auto_examine": true, 00:21:43.851 "iobuf_small_cache_size": 128, 00:21:43.851 "iobuf_large_cache_size": 16 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "bdev_raid_set_options", 00:21:43.851 "params": { 00:21:43.851 "process_window_size_kb": 1024 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "bdev_iscsi_set_options", 00:21:43.851 "params": { 00:21:43.851 "timeout_sec": 30 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "bdev_nvme_set_options", 00:21:43.851 "params": { 00:21:43.851 "action_on_timeout": "none", 00:21:43.851 "timeout_us": 0, 00:21:43.851 "timeout_admin_us": 0, 00:21:43.851 "keep_alive_timeout_ms": 10000, 00:21:43.851 "arbitration_burst": 0, 00:21:43.851 "low_priority_weight": 0, 00:21:43.851 "medium_priority_weight": 0, 00:21:43.851 "high_priority_weight": 0, 00:21:43.851 "nvme_adminq_poll_period_us": 10000, 00:21:43.851 "nvme_ioq_poll_period_us": 0, 00:21:43.851 "io_queue_requests": 0, 00:21:43.851 "delay_cmd_submit": true, 00:21:43.851 "transport_retry_count": 4, 00:21:43.851 "bdev_retry_count": 3, 00:21:43.851 "transport_ack_timeout": 0, 00:21:43.851 "ctrlr_loss_timeout_sec": 0, 00:21:43.851 "reconnect_delay_sec": 0, 00:21:43.851 "fast_io_fail_timeout_sec": 0, 00:21:43.851 "disable_auto_failback": false, 00:21:43.851 "generate_uuids": false, 00:21:43.851 "transport_tos": 0, 00:21:43.851 "nvme_error_stat": false, 00:21:43.851 "rdma_srq_size": 0, 00:21:43.851 "io_path_stat": false, 00:21:43.851 "allow_accel_sequence": false, 00:21:43.851 "rdma_max_cq_size": 0, 00:21:43.851 "rdma_cm_event_timeout_ms": 0, 00:21:43.851 "dhchap_digests": [ 00:21:43.851 "sha256", 00:21:43.851 "sha384", 00:21:43.851 "sha512" 00:21:43.851 ], 00:21:43.851 "dhchap_dhgroups": [ 00:21:43.851 "null", 00:21:43.851 "ffdhe2048", 00:21:43.851 "ffdhe3072", 00:21:43.851 "ffdhe4096", 00:21:43.851 "ffdhe6144", 00:21:43.851 "ffdhe8192" 00:21:43.851 ] 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "bdev_nvme_set_hotplug", 00:21:43.851 "params": { 00:21:43.851 "period_us": 100000, 00:21:43.851 "enable": false 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "bdev_malloc_create", 00:21:43.851 "params": { 00:21:43.851 "name": "malloc0", 00:21:43.851 "num_blocks": 8192, 00:21:43.851 "block_size": 4096, 00:21:43.851 "physical_block_size": 4096, 00:21:43.851 "uuid": "44f886d2-2c96-41ce-8aa0-300c2998434d", 00:21:43.851 "optimal_io_boundary": 0 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "bdev_wait_for_examine" 00:21:43.851 } 00:21:43.851 ] 
00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "subsystem": "nbd", 00:21:43.851 "config": [] 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "subsystem": "scheduler", 00:21:43.851 "config": [ 00:21:43.851 { 00:21:43.851 "method": "framework_set_scheduler", 00:21:43.851 "params": { 00:21:43.851 "name": "static" 00:21:43.851 } 00:21:43.851 } 00:21:43.851 ] 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "subsystem": "nvmf", 00:21:43.851 "config": [ 00:21:43.851 { 00:21:43.851 "method": "nvmf_set_config", 00:21:43.851 "params": { 00:21:43.851 "discovery_filter": "match_any", 00:21:43.851 "admin_cmd_passthru": { 00:21:43.851 "identify_ctrlr": false 00:21:43.851 } 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "nvmf_set_max_subsystems", 00:21:43.851 "params": { 00:21:43.851 "max_subsystems": 1024 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "nvmf_set_crdt", 00:21:43.851 "params": { 00:21:43.851 "crdt1": 0, 00:21:43.851 "crdt2": 0, 00:21:43.851 "crdt3": 0 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "nvmf_create_transport", 00:21:43.851 "params": { 00:21:43.851 "trtype": "TCP", 00:21:43.851 "max_queue_depth": 128, 00:21:43.851 "max_io_qpairs_per_ctrlr": 127, 00:21:43.851 "in_capsule_data_size": 4096, 00:21:43.851 "max_io_size": 131072, 00:21:43.851 "io_unit_size": 131072, 00:21:43.851 "max_aq_depth": 128, 00:21:43.851 "num_shared_buffers": 511, 00:21:43.851 "buf_cache_size": 4294967295, 00:21:43.851 "dif_insert_or_strip": false, 00:21:43.851 "zcopy": false, 00:21:43.851 "c2h_success": false, 00:21:43.851 "sock_priority": 0, 00:21:43.851 "abort_timeout_sec": 1, 00:21:43.851 "ack_timeout": 0, 00:21:43.851 "data_wr_pool_size": 0 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "nvmf_create_subsystem", 00:21:43.851 "params": { 00:21:43.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.851 "allow_any_host": false, 00:21:43.851 "serial_number": "SPDK00000000000001", 00:21:43.851 "model_number": "SPDK bdev Controller", 00:21:43.851 "max_namespaces": 10, 00:21:43.851 "min_cntlid": 1, 00:21:43.851 "max_cntlid": 65519, 00:21:43.851 "ana_reporting": false 00:21:43.851 } 00:21:43.851 }, 00:21:43.851 { 00:21:43.851 "method": "nvmf_subsystem_add_host", 00:21:43.851 "params": { 00:21:43.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.852 "host": "nqn.2016-06.io.spdk:host1", 00:21:43.852 "psk": "/tmp/tmp.Xz9RBDbITt" 00:21:43.852 } 00:21:43.852 }, 00:21:43.852 { 00:21:43.852 "method": "nvmf_subsystem_add_ns", 00:21:43.852 "params": { 00:21:43.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.852 "namespace": { 00:21:43.852 "nsid": 1, 00:21:43.852 "bdev_name": "malloc0", 00:21:43.852 "nguid": "44F886D22C9641CE8AA0300C2998434D", 00:21:43.852 "uuid": "44f886d2-2c96-41ce-8aa0-300c2998434d", 00:21:43.852 "no_auto_visible": false 00:21:43.852 } 00:21:43.852 } 00:21:43.852 }, 00:21:43.852 { 00:21:43.852 "method": "nvmf_subsystem_add_listener", 00:21:43.852 "params": { 00:21:43.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.852 "listen_address": { 00:21:43.852 "trtype": "TCP", 00:21:43.852 "adrfam": "IPv4", 00:21:43.852 "traddr": "10.0.0.2", 00:21:43.852 "trsvcid": "4420" 00:21:43.852 }, 00:21:43.852 "secure_channel": true 00:21:43.852 } 00:21:43.852 } 00:21:43.852 ] 00:21:43.852 } 00:21:43.852 ] 00:21:43.852 }' 00:21:43.852 13:59:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:44.418 13:59:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 
00:21:44.418 "subsystems": [ 00:21:44.418 { 00:21:44.418 "subsystem": "keyring", 00:21:44.418 "config": [] 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "subsystem": "iobuf", 00:21:44.418 "config": [ 00:21:44.418 { 00:21:44.418 "method": "iobuf_set_options", 00:21:44.418 "params": { 00:21:44.418 "small_pool_count": 8192, 00:21:44.418 "large_pool_count": 1024, 00:21:44.418 "small_bufsize": 8192, 00:21:44.418 "large_bufsize": 135168 00:21:44.418 } 00:21:44.418 } 00:21:44.418 ] 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "subsystem": "sock", 00:21:44.418 "config": [ 00:21:44.418 { 00:21:44.418 "method": "sock_impl_set_options", 00:21:44.418 "params": { 00:21:44.418 "impl_name": "uring", 00:21:44.418 "recv_buf_size": 2097152, 00:21:44.418 "send_buf_size": 2097152, 00:21:44.418 "enable_recv_pipe": true, 00:21:44.418 "enable_quickack": false, 00:21:44.418 "enable_placement_id": 0, 00:21:44.418 "enable_zerocopy_send_server": false, 00:21:44.418 "enable_zerocopy_send_client": false, 00:21:44.418 "zerocopy_threshold": 0, 00:21:44.418 "tls_version": 0, 00:21:44.418 "enable_ktls": false 00:21:44.418 } 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "method": "sock_impl_set_options", 00:21:44.418 "params": { 00:21:44.418 "impl_name": "posix", 00:21:44.418 "recv_buf_size": 2097152, 00:21:44.418 "send_buf_size": 2097152, 00:21:44.418 "enable_recv_pipe": true, 00:21:44.418 "enable_quickack": false, 00:21:44.418 "enable_placement_id": 0, 00:21:44.418 "enable_zerocopy_send_server": true, 00:21:44.418 "enable_zerocopy_send_client": false, 00:21:44.418 "zerocopy_threshold": 0, 00:21:44.418 "tls_version": 0, 00:21:44.418 "enable_ktls": false 00:21:44.418 } 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "method": "sock_impl_set_options", 00:21:44.418 "params": { 00:21:44.418 "impl_name": "ssl", 00:21:44.418 "recv_buf_size": 4096, 00:21:44.418 "send_buf_size": 4096, 00:21:44.418 "enable_recv_pipe": true, 00:21:44.418 "enable_quickack": false, 00:21:44.418 "enable_placement_id": 0, 00:21:44.418 "enable_zerocopy_send_server": true, 00:21:44.418 "enable_zerocopy_send_client": false, 00:21:44.418 "zerocopy_threshold": 0, 00:21:44.418 "tls_version": 0, 00:21:44.418 "enable_ktls": false 00:21:44.418 } 00:21:44.418 } 00:21:44.418 ] 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "subsystem": "vmd", 00:21:44.418 "config": [] 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "subsystem": "accel", 00:21:44.418 "config": [ 00:21:44.418 { 00:21:44.418 "method": "accel_set_options", 00:21:44.418 "params": { 00:21:44.418 "small_cache_size": 128, 00:21:44.418 "large_cache_size": 16, 00:21:44.418 "task_count": 2048, 00:21:44.418 "sequence_count": 2048, 00:21:44.419 "buf_count": 2048 00:21:44.419 } 00:21:44.419 } 00:21:44.419 ] 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "subsystem": "bdev", 00:21:44.419 "config": [ 00:21:44.419 { 00:21:44.419 "method": "bdev_set_options", 00:21:44.419 "params": { 00:21:44.419 "bdev_io_pool_size": 65535, 00:21:44.419 "bdev_io_cache_size": 256, 00:21:44.419 "bdev_auto_examine": true, 00:21:44.419 "iobuf_small_cache_size": 128, 00:21:44.419 "iobuf_large_cache_size": 16 00:21:44.419 } 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "method": "bdev_raid_set_options", 00:21:44.419 "params": { 00:21:44.419 "process_window_size_kb": 1024 00:21:44.419 } 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "method": "bdev_iscsi_set_options", 00:21:44.419 "params": { 00:21:44.419 "timeout_sec": 30 00:21:44.419 } 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "method": "bdev_nvme_set_options", 00:21:44.419 "params": { 00:21:44.419 
"action_on_timeout": "none", 00:21:44.419 "timeout_us": 0, 00:21:44.419 "timeout_admin_us": 0, 00:21:44.419 "keep_alive_timeout_ms": 10000, 00:21:44.419 "arbitration_burst": 0, 00:21:44.419 "low_priority_weight": 0, 00:21:44.419 "medium_priority_weight": 0, 00:21:44.419 "high_priority_weight": 0, 00:21:44.419 "nvme_adminq_poll_period_us": 10000, 00:21:44.419 "nvme_ioq_poll_period_us": 0, 00:21:44.419 "io_queue_requests": 512, 00:21:44.419 "delay_cmd_submit": true, 00:21:44.419 "transport_retry_count": 4, 00:21:44.419 "bdev_retry_count": 3, 00:21:44.419 "transport_ack_timeout": 0, 00:21:44.419 "ctrlr_loss_timeout_sec": 0, 00:21:44.419 "reconnect_delay_sec": 0, 00:21:44.419 "fast_io_fail_timeout_sec": 0, 00:21:44.419 "disable_auto_failback": false, 00:21:44.419 "generate_uuids": false, 00:21:44.419 "transport_tos": 0, 00:21:44.419 "nvme_error_stat": false, 00:21:44.419 "rdma_srq_size": 0, 00:21:44.419 "io_path_stat": false, 00:21:44.419 "allow_accel_sequence": false, 00:21:44.419 "rdma_max_cq_size": 0, 00:21:44.419 "rdma_cm_event_timeout_ms": 0, 00:21:44.419 "dhchap_digests": [ 00:21:44.419 "sha256", 00:21:44.419 "sha384", 00:21:44.419 "sha512" 00:21:44.419 ], 00:21:44.419 "dhchap_dhgroups": [ 00:21:44.419 "null", 00:21:44.419 "ffdhe2048", 00:21:44.419 "ffdhe3072", 00:21:44.419 "ffdhe4096", 00:21:44.419 "ffdhe6144", 00:21:44.419 "ffdhe8192" 00:21:44.419 ] 00:21:44.419 } 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "method": "bdev_nvme_attach_controller", 00:21:44.419 "params": { 00:21:44.419 "name": "TLSTEST", 00:21:44.419 "trtype": "TCP", 00:21:44.419 "adrfam": "IPv4", 00:21:44.419 "traddr": "10.0.0.2", 00:21:44.419 "trsvcid": "4420", 00:21:44.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.419 "prchk_reftag": false, 00:21:44.419 "prchk_guard": false, 00:21:44.419 "ctrlr_loss_timeout_sec": 0, 00:21:44.419 "reconnect_delay_sec": 0, 00:21:44.419 "fast_io_fail_timeout_sec": 0, 00:21:44.419 "psk": "/tmp/tmp.Xz9RBDbITt", 00:21:44.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.419 "hdgst": false, 00:21:44.419 "ddgst": false 00:21:44.419 } 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "method": "bdev_nvme_set_hotplug", 00:21:44.419 "params": { 00:21:44.419 "period_us": 100000, 00:21:44.419 "enable": false 00:21:44.419 } 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "method": "bdev_wait_for_examine" 00:21:44.419 } 00:21:44.419 ] 00:21:44.419 }, 00:21:44.419 { 00:21:44.419 "subsystem": "nbd", 00:21:44.419 "config": [] 00:21:44.419 } 00:21:44.419 ] 00:21:44.419 }' 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 72406 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72406 ']' 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72406 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72406 00:21:44.419 killing process with pid 72406 00:21:44.419 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.419 00:21:44.419 Latency(us) 00:21:44.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.419 =================================================================================================================== 00:21:44.419 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72406' 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72406 00:21:44.419 [2024-05-15 13:59:42.751689] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72406 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 72351 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72351 ']' 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72351 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:44.419 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:44.678 13:59:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72351 00:21:44.678 killing process with pid 72351 00:21:44.678 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:44.678 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:44.678 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72351' 00:21:44.678 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72351 00:21:44.678 [2024-05-15 13:59:43.007642] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:44.678 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72351 00:21:44.678 [2024-05-15 13:59:43.007702] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.937 13:59:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:44.937 13:59:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.937 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:44.937 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.937 13:59:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:44.937 "subsystems": [ 00:21:44.937 { 00:21:44.937 "subsystem": "keyring", 00:21:44.937 "config": [] 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "subsystem": "iobuf", 00:21:44.937 "config": [ 00:21:44.937 { 00:21:44.937 "method": "iobuf_set_options", 00:21:44.937 "params": { 00:21:44.937 "small_pool_count": 8192, 00:21:44.937 "large_pool_count": 1024, 00:21:44.937 "small_bufsize": 8192, 00:21:44.937 "large_bufsize": 135168 00:21:44.937 } 00:21:44.937 } 00:21:44.937 ] 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "subsystem": "sock", 00:21:44.937 "config": [ 00:21:44.937 { 00:21:44.937 "method": "sock_impl_set_options", 00:21:44.937 "params": { 00:21:44.937 "impl_name": "uring", 00:21:44.937 "recv_buf_size": 2097152, 00:21:44.937 "send_buf_size": 2097152, 00:21:44.937 "enable_recv_pipe": true, 00:21:44.937 "enable_quickack": false, 00:21:44.937 "enable_placement_id": 0, 00:21:44.937 "enable_zerocopy_send_server": false, 00:21:44.937 
"enable_zerocopy_send_client": false, 00:21:44.937 "zerocopy_threshold": 0, 00:21:44.937 "tls_version": 0, 00:21:44.937 "enable_ktls": false 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "sock_impl_set_options", 00:21:44.937 "params": { 00:21:44.937 "impl_name": "posix", 00:21:44.937 "recv_buf_size": 2097152, 00:21:44.937 "send_buf_size": 2097152, 00:21:44.937 "enable_recv_pipe": true, 00:21:44.937 "enable_quickack": false, 00:21:44.937 "enable_placement_id": 0, 00:21:44.937 "enable_zerocopy_send_server": true, 00:21:44.937 "enable_zerocopy_send_client": false, 00:21:44.937 "zerocopy_threshold": 0, 00:21:44.937 "tls_version": 0, 00:21:44.937 "enable_ktls": false 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "sock_impl_set_options", 00:21:44.937 "params": { 00:21:44.937 "impl_name": "ssl", 00:21:44.937 "recv_buf_size": 4096, 00:21:44.937 "send_buf_size": 4096, 00:21:44.937 "enable_recv_pipe": true, 00:21:44.937 "enable_quickack": false, 00:21:44.937 "enable_placement_id": 0, 00:21:44.937 "enable_zerocopy_send_server": true, 00:21:44.937 "enable_zerocopy_send_client": false, 00:21:44.937 "zerocopy_threshold": 0, 00:21:44.937 "tls_version": 0, 00:21:44.937 "enable_ktls": false 00:21:44.937 } 00:21:44.937 } 00:21:44.937 ] 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "subsystem": "vmd", 00:21:44.937 "config": [] 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "subsystem": "accel", 00:21:44.937 "config": [ 00:21:44.937 { 00:21:44.937 "method": "accel_set_options", 00:21:44.937 "params": { 00:21:44.937 "small_cache_size": 128, 00:21:44.937 "large_cache_size": 16, 00:21:44.937 "task_count": 2048, 00:21:44.937 "sequence_count": 2048, 00:21:44.937 "buf_count": 2048 00:21:44.937 } 00:21:44.937 } 00:21:44.937 ] 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "subsystem": "bdev", 00:21:44.937 "config": [ 00:21:44.937 { 00:21:44.937 "method": "bdev_set_options", 00:21:44.937 "params": { 00:21:44.937 "bdev_io_pool_size": 65535, 00:21:44.937 "bdev_io_cache_size": 256, 00:21:44.937 "bdev_auto_examine": true, 00:21:44.937 "iobuf_small_cache_size": 128, 00:21:44.937 "iobuf_large_cache_size": 16 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "bdev_raid_set_options", 00:21:44.937 "params": { 00:21:44.937 "process_window_size_kb": 1024 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "bdev_iscsi_set_options", 00:21:44.937 "params": { 00:21:44.937 "timeout_sec": 30 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "bdev_nvme_set_options", 00:21:44.937 "params": { 00:21:44.937 "action_on_timeout": "none", 00:21:44.937 "timeout_us": 0, 00:21:44.937 "timeout_admin_us": 0, 00:21:44.937 "keep_alive_timeout_ms": 10000, 00:21:44.937 "arbitration_burst": 0, 00:21:44.937 "low_priority_weight": 0, 00:21:44.937 "medium_priority_weight": 0, 00:21:44.937 "high_priority_weight": 0, 00:21:44.937 "nvme_adminq_poll_period_us": 10000, 00:21:44.937 "nvme_ioq_poll_period_us": 0, 00:21:44.937 "io_queue_requests": 0, 00:21:44.937 "delay_cmd_submit": true, 00:21:44.937 "transport_retry_count": 4, 00:21:44.937 "bdev_retry_count": 3, 00:21:44.937 "transport_ack_timeout": 0, 00:21:44.937 "ctrlr_loss_timeout_sec": 0, 00:21:44.937 "reconnect_delay_sec": 0, 00:21:44.937 "fast_io_fail_timeout_sec": 0, 00:21:44.937 "disable_auto_failback": false, 00:21:44.937 "generate_uuids": false, 00:21:44.937 "transport_tos": 0, 00:21:44.937 "nvme_error_stat": false, 00:21:44.937 "rdma_srq_size": 0, 00:21:44.937 "io_path_stat": false, 00:21:44.937 
"allow_accel_sequence": false, 00:21:44.937 "rdma_max_cq_size": 0, 00:21:44.937 "rdma_cm_event_timeout_ms": 0, 00:21:44.937 "dhchap_digests": [ 00:21:44.937 "sha256", 00:21:44.937 "sha384", 00:21:44.937 "sha512" 00:21:44.937 ], 00:21:44.937 "dhchap_dhgroups": [ 00:21:44.937 "null", 00:21:44.937 "ffdhe2048", 00:21:44.937 "ffdhe3072", 00:21:44.937 "ffdhe4096", 00:21:44.937 "ffdhe6144", 00:21:44.937 "ffdhe8192" 00:21:44.937 ] 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "bdev_nvme_set_hotplug", 00:21:44.937 "params": { 00:21:44.937 "period_us": 100000, 00:21:44.937 "enable": false 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "bdev_malloc_create", 00:21:44.937 "params": { 00:21:44.937 "name": "malloc0", 00:21:44.937 "num_blocks": 8192, 00:21:44.937 "block_size": 4096, 00:21:44.937 "physical_block_size": 4096, 00:21:44.937 "uuid": "44f886d2-2c96-41ce-8aa0-300c2998434d", 00:21:44.937 "optimal_io_boundary": 0 00:21:44.937 } 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "method": "bdev_wait_for_examine" 00:21:44.937 } 00:21:44.937 ] 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "subsystem": "nbd", 00:21:44.937 "config": [] 00:21:44.937 }, 00:21:44.937 { 00:21:44.937 "subsystem": "scheduler", 00:21:44.937 "config": [ 00:21:44.937 { 00:21:44.937 "method": "framework_set_scheduler", 00:21:44.938 "params": { 00:21:44.938 "name": "static" 00:21:44.938 } 00:21:44.938 } 00:21:44.938 ] 00:21:44.938 }, 00:21:44.938 { 00:21:44.938 "subsystem": "nvmf", 00:21:44.938 "config": [ 00:21:44.938 { 00:21:44.938 "method": "nvmf_set_config", 00:21:44.938 "params": { 00:21:44.938 "discovery_filter": "match_any", 00:21:44.938 "admin_cmd_passthru": { 00:21:44.938 "identify_ctrlr": false 00:21:44.938 } 00:21:44.938 } 00:21:44.938 }, 00:21:44.938 { 00:21:44.938 "method": "nvmf_set_max_subsystems", 00:21:44.938 "params": { 00:21:44.938 "max_subsystems": 1024 00:21:44.938 } 00:21:44.938 }, 00:21:44.938 { 00:21:44.938 "method": "nvmf_set_crdt", 00:21:44.938 "params": { 00:21:44.938 "crdt1": 0, 00:21:44.938 "crdt2": 0, 00:21:44.938 "crdt3": 0 00:21:44.938 } 00:21:44.938 }, 00:21:44.938 { 00:21:44.938 "method": "nvmf_create_transport", 00:21:44.938 "params": { 00:21:44.938 "trtype": "TCP", 00:21:44.938 "max_queue_depth": 128, 00:21:44.938 "max_io_qpairs_per_ctrlr": 127, 00:21:44.938 "in_capsule_data_size": 4096, 00:21:44.938 "max_io_size": 131072, 00:21:44.938 "io_unit_size": 131072, 00:21:44.938 "max_aq_depth": 128, 00:21:44.938 "num_shared_buffers": 511, 00:21:44.938 "buf_cache_size": 4294967295, 00:21:44.938 "dif_insert_or_strip": false, 00:21:44.938 "zcopy": false, 00:21:44.938 "c2h_success": false, 00:21:44.938 "sock_priority": 0, 00:21:44.938 "abort_timeout_sec": 1, 00:21:44.938 "ack_timeout": 0, 00:21:44.938 "data_wr_pool_size": 0 00:21:44.938 } 00:21:44.938 }, 00:21:44.938 { 00:21:44.938 "method": "nvmf_create_subsystem", 00:21:44.938 "params": { 00:21:44.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.938 "allow_any_host": false, 00:21:44.938 "serial_number": "SPDK00000000000001", 00:21:44.938 "model_number": "SPDK bdev Controller", 00:21:44.938 "max_namespaces": 10, 00:21:44.938 "min_cntlid": 1, 00:21:44.938 "max_cntlid": 65519, 00:21:44.938 "ana_reporting": false 00:21:44.938 } 00:21:44.938 }, 00:21:44.938 { 00:21:44.938 "method": "nvmf_subsystem_add_host", 00:21:44.938 "params": { 00:21:44.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.938 "host": "nqn.2016-06.io.spdk:host1", 00:21:44.938 "psk": "/tmp/tmp.Xz9RBDbITt" 00:21:44.938 } 00:21:44.938 }, 00:21:44.938 { 
00:21:44.938 "method": "nvmf_subsystem_add_ns", 00:21:44.938 "params": { 00:21:44.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.938 "namespace": { 00:21:44.938 "nsid": 1, 00:21:44.938 "bdev_name": "malloc0", 00:21:44.938 "nguid": "44F886D22C9641CE8AA0300C2998434D", 00:21:44.938 "uuid": "44f886d2-2c96-41ce-8aa0-300c2998434d", 00:21:44.938 "no_auto_visible": false 00:21:44.938 } 00:21:44.938 } 00:21:44.938 }, 00:21:44.938 { 00:21:44.938 "method": "nvmf_subsystem_add_listener", 00:21:44.938 "params": { 00:21:44.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.938 "listen_address": { 00:21:44.938 "trtype": "TCP", 00:21:44.938 "adrfam": "IPv4", 00:21:44.938 "traddr": "10.0.0.2", 00:21:44.938 "trsvcid": "4420" 00:21:44.938 }, 00:21:44.938 "secure_channel": true 00:21:44.938 } 00:21:44.938 } 00:21:44.938 ] 00:21:44.938 } 00:21:44.938 ] 00:21:44.938 }' 00:21:44.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72454 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72454 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72454 ']' 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.938 13:59:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:44.938 [2024-05-15 13:59:43.435358] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:44.938 [2024-05-15 13:59:43.435440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.196 [2024-05-15 13:59:43.581470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.196 [2024-05-15 13:59:43.733892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.196 [2024-05-15 13:59:43.733955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.196 [2024-05-15 13:59:43.733966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.196 [2024-05-15 13:59:43.733975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.196 [2024-05-15 13:59:43.733982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.196 [2024-05-15 13:59:43.734075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.457 [2024-05-15 13:59:44.000843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.457 [2024-05-15 13:59:44.016757] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:45.719 [2024-05-15 13:59:44.032688] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:45.719 [2024-05-15 13:59:44.032764] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.719 [2024-05-15 13:59:44.032976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.719 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:45.719 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:45.719 13:59:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.719 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.719 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=72485 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 72485 /var/tmp/bdevperf.sock 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72485 ']' 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:45.979 "subsystems": [ 00:21:45.979 { 00:21:45.979 "subsystem": "keyring", 00:21:45.979 "config": [] 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "subsystem": "iobuf", 00:21:45.979 "config": [ 00:21:45.979 { 00:21:45.979 "method": "iobuf_set_options", 00:21:45.979 "params": { 00:21:45.979 "small_pool_count": 8192, 00:21:45.979 "large_pool_count": 1024, 00:21:45.979 "small_bufsize": 8192, 00:21:45.979 "large_bufsize": 135168 00:21:45.979 } 00:21:45.979 } 00:21:45.979 ] 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "subsystem": "sock", 00:21:45.979 "config": [ 00:21:45.979 { 00:21:45.979 "method": "sock_impl_set_options", 00:21:45.979 "params": { 00:21:45.979 "impl_name": "uring", 00:21:45.979 "recv_buf_size": 2097152, 00:21:45.979 "send_buf_size": 2097152, 00:21:45.979 "enable_recv_pipe": true, 00:21:45.979 "enable_quickack": false, 00:21:45.979 "enable_placement_id": 0, 00:21:45.979 "enable_zerocopy_send_server": false, 00:21:45.979 "enable_zerocopy_send_client": false, 00:21:45.979 "zerocopy_threshold": 0, 00:21:45.979 "tls_version": 0, 00:21:45.979 "enable_ktls": false 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "sock_impl_set_options", 00:21:45.979 "params": { 00:21:45.979 "impl_name": "posix", 00:21:45.979 "recv_buf_size": 2097152, 00:21:45.979 "send_buf_size": 2097152, 00:21:45.979 "enable_recv_pipe": true, 00:21:45.979 "enable_quickack": false, 00:21:45.979 "enable_placement_id": 0, 00:21:45.979 "enable_zerocopy_send_server": true, 00:21:45.979 "enable_zerocopy_send_client": false, 00:21:45.979 "zerocopy_threshold": 0, 00:21:45.979 "tls_version": 0, 00:21:45.979 "enable_ktls": false 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "sock_impl_set_options", 
00:21:45.979 "params": { 00:21:45.979 "impl_name": "ssl", 00:21:45.979 "recv_buf_size": 4096, 00:21:45.979 "send_buf_size": 4096, 00:21:45.979 "enable_recv_pipe": true, 00:21:45.979 "enable_quickack": false, 00:21:45.979 "enable_placement_id": 0, 00:21:45.979 "enable_zerocopy_send_server": true, 00:21:45.979 "enable_zerocopy_send_client": false, 00:21:45.979 "zerocopy_threshold": 0, 00:21:45.979 "tls_version": 0, 00:21:45.979 "enable_ktls": false 00:21:45.979 } 00:21:45.979 } 00:21:45.979 ] 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "subsystem": "vmd", 00:21:45.979 "config": [] 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "subsystem": "accel", 00:21:45.979 "config": [ 00:21:45.979 { 00:21:45.979 "method": "accel_set_options", 00:21:45.979 "params": { 00:21:45.979 "small_cache_size": 128, 00:21:45.979 "large_cache_size": 16, 00:21:45.979 "task_count": 2048, 00:21:45.979 "sequence_count": 2048, 00:21:45.979 "buf_count": 2048 00:21:45.979 } 00:21:45.979 } 00:21:45.979 ] 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "subsystem": "bdev", 00:21:45.979 "config": [ 00:21:45.979 { 00:21:45.979 "method": "bdev_set_options", 00:21:45.979 "params": { 00:21:45.979 "bdev_io_pool_size": 65535, 00:21:45.979 "bdev_io_cache_size": 256, 00:21:45.979 "bdev_auto_examine": true, 00:21:45.979 "iobuf_small_cache_size": 128, 00:21:45.979 "iobuf_large_cache_size": 16 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "bdev_raid_set_options", 00:21:45.979 "params": { 00:21:45.979 "process_window_size_kb": 1024 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "bdev_iscsi_set_options", 00:21:45.979 "params": { 00:21:45.979 "timeout_sec": 30 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "bdev_nvme_set_options", 00:21:45.979 "params": { 00:21:45.979 "action_on_timeout": "none", 00:21:45.979 "timeout_us": 0, 00:21:45.979 "timeout_admin_us": 0, 00:21:45.979 "keep_alive_timeout_ms": 10000, 00:21:45.979 "arbitration_burst": 0, 00:21:45.979 "low_priority_weight": 0, 00:21:45.979 "medium_priority_weight": 0, 00:21:45.979 "high_priority_weight": 0, 00:21:45.979 "nvme_adminq_poll_period_us": 10000, 00:21:45.979 "nvme_ioq_poll_period_us": 0, 00:21:45.979 "io_queue_requests": 512, 00:21:45.979 "delay_cmd_submit": true, 00:21:45.979 "transport_retry_count": 4, 00:21:45.979 "bdev_retry_count": 3, 00:21:45.979 "transport_ack_timeout": 0, 00:21:45.979 "ctrlr_loss_timeout_sec": 0, 00:21:45.979 "reconnect_delay_sec": 0, 00:21:45.979 "fast_io_fail_timeout_sec": 0, 00:21:45.979 "disable_auto_failback": false, 00:21:45.979 "generate_uuids": false, 00:21:45.979 "transport_tos": 0, 00:21:45.979 "nvme_error_stat": false, 00:21:45.979 "rdma_srq_size": 0, 00:21:45.979 "io_path_stat": false, 00:21:45.979 "allow_accel_sequence": false, 00:21:45.979 "rdma_max_cq_size": 0, 00:21:45.979 "rdma_cm_event_timeout_ms": 0, 00:21:45.979 "dhchap_digests": [ 00:21:45.979 "sha256", 00:21:45.979 "sha384", 00:21:45.979 "sha512" 00:21:45.979 ], 00:21:45.979 "dhchap_dhgroups": [ 00:21:45.979 "null", 00:21:45.979 "ffdhe2048", 00:21:45.979 "ffdhe3072", 00:21:45.979 "ffdhe4096", 00:21:45.979 "ffdhe6144", 00:21:45.979 "ffdhe8192" 00:21:45.979 ] 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "bdev_nvme_attach_controller", 00:21:45.979 "params": { 00:21:45.979 "name": "TLSTEST", 00:21:45.979 "trtype": "TCP", 00:21:45.979 "adrfam": "IPv4", 00:21:45.979 "traddr": "10.0.0.2", 00:21:45.979 "trsvcid": "4420", 00:21:45.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.979 "prchk_reftag": 
false, 00:21:45.979 "prchk_guard": false, 00:21:45.979 "ctrlr_loss_timeout_sec": 0, 00:21:45.979 "reconnect_delay_sec": 0, 00:21:45.979 "fast_io_fail_timeout_sec": 0, 00:21:45.979 "psk": "/tmp/tmp.Xz9RBDbITt", 00:21:45.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.979 "hdgst": false, 00:21:45.979 "ddgst": false 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "bdev_nvme_set_hotplug", 00:21:45.979 "params": { 00:21:45.979 "period_us": 100000, 00:21:45.979 "enable": false 00:21:45.979 } 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "method": "bdev_wait_for_examine" 00:21:45.979 } 00:21:45.979 ] 00:21:45.979 }, 00:21:45.979 { 00:21:45.979 "subsystem": "nbd", 00:21:45.979 "config": [] 00:21:45.979 } 00:21:45.979 ] 00:21:45.979 }' 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.979 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:45.980 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.980 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:45.980 13:59:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.980 [2024-05-15 13:59:44.379523] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:45.980 [2024-05-15 13:59:44.379774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72485 ] 00:21:45.980 [2024-05-15 13:59:44.524844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.238 [2024-05-15 13:59:44.630050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.238 [2024-05-15 13:59:44.777379] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.238 [2024-05-15 13:59:44.777480] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:46.804 13:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:46.804 13:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:46.804 13:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:46.804 Running I/O for 10 seconds... 
00:21:59.007 00:21:59.007 Latency(us) 00:21:59.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.007 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:59.007 Verification LBA range: start 0x0 length 0x2000 00:21:59.007 TLSTESTn1 : 10.01 4838.37 18.90 0.00 0.00 26411.22 6158.80 26109.12 00:21:59.007 =================================================================================================================== 00:21:59.007 Total : 4838.37 18.90 0.00 0.00 26411.22 6158.80 26109.12 00:21:59.007 0 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 72485 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72485 ']' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72485 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72485 00:21:59.007 killing process with pid 72485 00:21:59.007 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.007 00:21:59.007 Latency(us) 00:21:59.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.007 =================================================================================================================== 00:21:59.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72485' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72485 00:21:59.007 [2024-05-15 13:59:55.416942] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72485 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 72454 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72454 ']' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72454 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72454 00:21:59.007 killing process with pid 72454 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72454' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72454 00:21:59.007 [2024-05-15 13:59:55.653916] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of 
trtype' scheduled for removal in v24.09 hit 1 times 00:21:59.007 [2024-05-15 13:59:55.653963] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72454 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72619 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72619 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72619 ']' 00:21:59.007 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.008 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:59.008 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.008 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:59.008 13:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.008 [2024-05-15 13:59:55.950457] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:21:59.008 [2024-05-15 13:59:55.950537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.008 [2024-05-15 13:59:56.094250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.008 [2024-05-15 13:59:56.193954] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.008 [2024-05-15 13:59:56.194007] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.008 [2024-05-15 13:59:56.194018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.008 [2024-05-15 13:59:56.194027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.008 [2024-05-15 13:59:56.194037] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
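Note: the killprocess calls above follow the pattern visible in the trace (kill -0 liveness check, reactor-name lookup via ps, kill, then wait to reap the process). A simplified sketch, not the full common/autotest_common.sh helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                          # process must still exist
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap it and collect its exit status
    }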
00:21:59.008 [2024-05-15 13:59:56.194069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Xz9RBDbITt 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Xz9RBDbITt 00:21:59.008 13:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:59.008 [2024-05-15 13:59:57.050517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.008 13:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:59.008 13:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:59.267 [2024-05-15 13:59:57.573764] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:59.267 [2024-05-15 13:59:57.573869] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.267 [2024-05-15 13:59:57.574055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.267 13:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:59.525 malloc0 00:21:59.525 13:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:59.783 13:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xz9RBDbITt 00:21:59.783 [2024-05-15 13:59:58.338368] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=72674 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 72674 /var/tmp/bdevperf.sock 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72674 ']' 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:00.042 13:59:58 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:00.042 13:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.042 [2024-05-15 13:59:58.406996] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:00.042 [2024-05-15 13:59:58.407245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72674 ] 00:22:00.042 [2024-05-15 13:59:58.553431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.300 [2024-05-15 13:59:58.665435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.867 13:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:00.867 13:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:00.867 13:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Xz9RBDbITt 00:22:01.125 13:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:01.383 [2024-05-15 13:59:59.709421] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.383 nvme0n1 00:22:01.383 13:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.383 Running I/O for 1 seconds... 
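(Annotation.) The xtrace output above (target/tls.sh lines 219-232) is the core TLS data-path setup: a TCP transport with a TLS-enabled listener and a PSK-authorized host on the target side, and a bdevperf initiator that registers the same PSK through the keyring and attaches over the secure channel. Collected into one place, the traced commands are roughly the following; this is a condensed sketch, with full paths shortened ($rpc is scripts/rpc.py, bdevperf is build/examples/bdevperf) and the generated PSK file kept as the placeholder name the run used (its contents are not shown in this excerpt).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.Xz9RBDbITt                            # PSK file generated earlier in the run

# target side (default RPC socket /var/tmp/spdk.sock)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k = secure channel (TLS)
$rpc bdev_malloc_create 32 4096 -b malloc0         # 32 MB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key # 'PSK path' form, flagged deprecated above

# initiator side (bdevperf with its own RPC socket)
bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $key
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests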
00:22:02.784 00:22:02.784 Latency(us) 00:22:02.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.784 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:02.784 Verification LBA range: start 0x0 length 0x2000 00:22:02.784 nvme0n1 : 1.01 5187.93 20.27 0.00 0.00 24480.36 4974.42 19055.45 00:22:02.784 =================================================================================================================== 00:22:02.784 Total : 5187.93 20.27 0.00 0.00 24480.36 4974.42 19055.45 00:22:02.784 0 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 72674 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72674 ']' 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72674 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72674 00:22:02.784 killing process with pid 72674 00:22:02.784 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.784 00:22:02.784 Latency(us) 00:22:02.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.784 =================================================================================================================== 00:22:02.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72674' 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72674 00:22:02.784 14:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72674 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 72619 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72619 ']' 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72619 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72619 00:22:02.784 killing process with pid 72619 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72619' 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72619 00:22:02.784 [2024-05-15 14:00:01.190911] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:02.784 [2024-05-15 14:00:01.190953] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:02.784 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
72619 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72727 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72727 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72727 ']' 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:03.041 14:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.041 [2024-05-15 14:00:01.466522] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:03.041 [2024-05-15 14:00:01.466606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.041 [2024-05-15 14:00:01.596913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.297 [2024-05-15 14:00:01.713695] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.297 [2024-05-15 14:00:01.713767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.297 [2024-05-15 14:00:01.713778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.297 [2024-05-15 14:00:01.713787] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.297 [2024-05-15 14:00:01.713795] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:03.297 [2024-05-15 14:00:01.713822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.858 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.858 [2024-05-15 14:00:02.407335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.115 malloc0 00:22:04.115 [2024-05-15 14:00:02.436494] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:04.115 [2024-05-15 14:00:02.436571] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.115 [2024-05-15 14:00:02.436764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=72758 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 72758 /var/tmp/bdevperf.sock 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72758 ']' 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.115 14:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.115 [2024-05-15 14:00:02.512791] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:22:04.115 [2024-05-15 14:00:02.512887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:22:04.115 [2024-05-15 14:00:02.651466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.371 [2024-05-15 14:00:02.782993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.936 14:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.936 14:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:04.936 14:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Xz9RBDbITt 00:22:05.193 14:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:05.452 [2024-05-15 14:00:03.858084] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.452 nvme0n1 00:22:05.452 14:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.711 Running I/O for 1 seconds... 00:22:06.644 00:22:06.644 Latency(us) 00:22:06.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.644 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:06.644 Verification LBA range: start 0x0 length 0x2000 00:22:06.644 nvme0n1 : 1.01 5228.05 20.42 0.00 0.00 24282.60 5106.02 18107.94 00:22:06.644 =================================================================================================================== 00:22:06.644 Total : 5228.05 20.42 0.00 0.00 24282.60 5106.02 18107.94 00:22:06.644 0 00:22:06.644 14:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:06.644 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.644 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.902 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.902 14:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:06.902 "subsystems": [ 00:22:06.902 { 00:22:06.902 "subsystem": "keyring", 00:22:06.902 "config": [ 00:22:06.902 { 00:22:06.902 "method": "keyring_file_add_key", 00:22:06.902 "params": { 00:22:06.902 "name": "key0", 00:22:06.902 "path": "/tmp/tmp.Xz9RBDbITt" 00:22:06.902 } 00:22:06.902 } 00:22:06.902 ] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "iobuf", 00:22:06.902 "config": [ 00:22:06.902 { 00:22:06.902 "method": "iobuf_set_options", 00:22:06.902 "params": { 00:22:06.902 "small_pool_count": 8192, 00:22:06.902 "large_pool_count": 1024, 00:22:06.902 "small_bufsize": 8192, 00:22:06.902 "large_bufsize": 135168 00:22:06.902 } 00:22:06.902 } 00:22:06.902 ] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "sock", 00:22:06.902 "config": [ 00:22:06.902 { 00:22:06.902 "method": "sock_impl_set_options", 00:22:06.902 "params": { 00:22:06.902 "impl_name": "uring", 00:22:06.902 "recv_buf_size": 2097152, 00:22:06.902 "send_buf_size": 2097152, 00:22:06.902 "enable_recv_pipe": true, 00:22:06.902 "enable_quickack": false, 
00:22:06.902 "enable_placement_id": 0, 00:22:06.902 "enable_zerocopy_send_server": false, 00:22:06.902 "enable_zerocopy_send_client": false, 00:22:06.902 "zerocopy_threshold": 0, 00:22:06.902 "tls_version": 0, 00:22:06.902 "enable_ktls": false 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "sock_impl_set_options", 00:22:06.902 "params": { 00:22:06.902 "impl_name": "posix", 00:22:06.902 "recv_buf_size": 2097152, 00:22:06.902 "send_buf_size": 2097152, 00:22:06.902 "enable_recv_pipe": true, 00:22:06.902 "enable_quickack": false, 00:22:06.902 "enable_placement_id": 0, 00:22:06.902 "enable_zerocopy_send_server": true, 00:22:06.902 "enable_zerocopy_send_client": false, 00:22:06.902 "zerocopy_threshold": 0, 00:22:06.902 "tls_version": 0, 00:22:06.902 "enable_ktls": false 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "sock_impl_set_options", 00:22:06.902 "params": { 00:22:06.902 "impl_name": "ssl", 00:22:06.902 "recv_buf_size": 4096, 00:22:06.902 "send_buf_size": 4096, 00:22:06.902 "enable_recv_pipe": true, 00:22:06.902 "enable_quickack": false, 00:22:06.902 "enable_placement_id": 0, 00:22:06.902 "enable_zerocopy_send_server": true, 00:22:06.902 "enable_zerocopy_send_client": false, 00:22:06.902 "zerocopy_threshold": 0, 00:22:06.902 "tls_version": 0, 00:22:06.902 "enable_ktls": false 00:22:06.902 } 00:22:06.902 } 00:22:06.902 ] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "vmd", 00:22:06.902 "config": [] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "accel", 00:22:06.902 "config": [ 00:22:06.902 { 00:22:06.902 "method": "accel_set_options", 00:22:06.902 "params": { 00:22:06.902 "small_cache_size": 128, 00:22:06.902 "large_cache_size": 16, 00:22:06.902 "task_count": 2048, 00:22:06.902 "sequence_count": 2048, 00:22:06.902 "buf_count": 2048 00:22:06.902 } 00:22:06.902 } 00:22:06.902 ] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "bdev", 00:22:06.902 "config": [ 00:22:06.902 { 00:22:06.902 "method": "bdev_set_options", 00:22:06.902 "params": { 00:22:06.902 "bdev_io_pool_size": 65535, 00:22:06.902 "bdev_io_cache_size": 256, 00:22:06.902 "bdev_auto_examine": true, 00:22:06.902 "iobuf_small_cache_size": 128, 00:22:06.902 "iobuf_large_cache_size": 16 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "bdev_raid_set_options", 00:22:06.902 "params": { 00:22:06.902 "process_window_size_kb": 1024 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "bdev_iscsi_set_options", 00:22:06.902 "params": { 00:22:06.902 "timeout_sec": 30 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "bdev_nvme_set_options", 00:22:06.902 "params": { 00:22:06.902 "action_on_timeout": "none", 00:22:06.902 "timeout_us": 0, 00:22:06.902 "timeout_admin_us": 0, 00:22:06.902 "keep_alive_timeout_ms": 10000, 00:22:06.902 "arbitration_burst": 0, 00:22:06.902 "low_priority_weight": 0, 00:22:06.902 "medium_priority_weight": 0, 00:22:06.902 "high_priority_weight": 0, 00:22:06.902 "nvme_adminq_poll_period_us": 10000, 00:22:06.902 "nvme_ioq_poll_period_us": 0, 00:22:06.902 "io_queue_requests": 0, 00:22:06.902 "delay_cmd_submit": true, 00:22:06.902 "transport_retry_count": 4, 00:22:06.902 "bdev_retry_count": 3, 00:22:06.902 "transport_ack_timeout": 0, 00:22:06.902 "ctrlr_loss_timeout_sec": 0, 00:22:06.902 "reconnect_delay_sec": 0, 00:22:06.902 "fast_io_fail_timeout_sec": 0, 00:22:06.902 "disable_auto_failback": false, 00:22:06.902 "generate_uuids": false, 00:22:06.902 "transport_tos": 0, 00:22:06.902 
"nvme_error_stat": false, 00:22:06.902 "rdma_srq_size": 0, 00:22:06.902 "io_path_stat": false, 00:22:06.902 "allow_accel_sequence": false, 00:22:06.902 "rdma_max_cq_size": 0, 00:22:06.902 "rdma_cm_event_timeout_ms": 0, 00:22:06.902 "dhchap_digests": [ 00:22:06.902 "sha256", 00:22:06.902 "sha384", 00:22:06.902 "sha512" 00:22:06.902 ], 00:22:06.902 "dhchap_dhgroups": [ 00:22:06.902 "null", 00:22:06.902 "ffdhe2048", 00:22:06.902 "ffdhe3072", 00:22:06.902 "ffdhe4096", 00:22:06.902 "ffdhe6144", 00:22:06.902 "ffdhe8192" 00:22:06.902 ] 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "bdev_nvme_set_hotplug", 00:22:06.902 "params": { 00:22:06.902 "period_us": 100000, 00:22:06.902 "enable": false 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "bdev_malloc_create", 00:22:06.902 "params": { 00:22:06.902 "name": "malloc0", 00:22:06.902 "num_blocks": 8192, 00:22:06.902 "block_size": 4096, 00:22:06.902 "physical_block_size": 4096, 00:22:06.902 "uuid": "487889d2-bdcd-41dd-bba6-3679c4642096", 00:22:06.902 "optimal_io_boundary": 0 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "bdev_wait_for_examine" 00:22:06.902 } 00:22:06.902 ] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "nbd", 00:22:06.902 "config": [] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "scheduler", 00:22:06.902 "config": [ 00:22:06.902 { 00:22:06.902 "method": "framework_set_scheduler", 00:22:06.902 "params": { 00:22:06.902 "name": "static" 00:22:06.902 } 00:22:06.902 } 00:22:06.902 ] 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "subsystem": "nvmf", 00:22:06.902 "config": [ 00:22:06.902 { 00:22:06.902 "method": "nvmf_set_config", 00:22:06.902 "params": { 00:22:06.902 "discovery_filter": "match_any", 00:22:06.902 "admin_cmd_passthru": { 00:22:06.902 "identify_ctrlr": false 00:22:06.902 } 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "nvmf_set_max_subsystems", 00:22:06.902 "params": { 00:22:06.902 "max_subsystems": 1024 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "nvmf_set_crdt", 00:22:06.902 "params": { 00:22:06.902 "crdt1": 0, 00:22:06.902 "crdt2": 0, 00:22:06.902 "crdt3": 0 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "nvmf_create_transport", 00:22:06.902 "params": { 00:22:06.902 "trtype": "TCP", 00:22:06.902 "max_queue_depth": 128, 00:22:06.902 "max_io_qpairs_per_ctrlr": 127, 00:22:06.902 "in_capsule_data_size": 4096, 00:22:06.902 "max_io_size": 131072, 00:22:06.902 "io_unit_size": 131072, 00:22:06.902 "max_aq_depth": 128, 00:22:06.902 "num_shared_buffers": 511, 00:22:06.902 "buf_cache_size": 4294967295, 00:22:06.902 "dif_insert_or_strip": false, 00:22:06.902 "zcopy": false, 00:22:06.902 "c2h_success": false, 00:22:06.902 "sock_priority": 0, 00:22:06.902 "abort_timeout_sec": 1, 00:22:06.902 "ack_timeout": 0, 00:22:06.902 "data_wr_pool_size": 0 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "nvmf_create_subsystem", 00:22:06.902 "params": { 00:22:06.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.902 "allow_any_host": false, 00:22:06.902 "serial_number": "00000000000000000000", 00:22:06.902 "model_number": "SPDK bdev Controller", 00:22:06.902 "max_namespaces": 32, 00:22:06.902 "min_cntlid": 1, 00:22:06.902 "max_cntlid": 65519, 00:22:06.902 "ana_reporting": false 00:22:06.902 } 00:22:06.902 }, 00:22:06.902 { 00:22:06.902 "method": "nvmf_subsystem_add_host", 00:22:06.902 "params": { 00:22:06.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.903 "host": 
"nqn.2016-06.io.spdk:host1", 00:22:06.903 "psk": "key0" 00:22:06.903 } 00:22:06.903 }, 00:22:06.903 { 00:22:06.903 "method": "nvmf_subsystem_add_ns", 00:22:06.903 "params": { 00:22:06.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.903 "namespace": { 00:22:06.903 "nsid": 1, 00:22:06.903 "bdev_name": "malloc0", 00:22:06.903 "nguid": "487889D2BDCD41DDBBA63679C4642096", 00:22:06.903 "uuid": "487889d2-bdcd-41dd-bba6-3679c4642096", 00:22:06.903 "no_auto_visible": false 00:22:06.903 } 00:22:06.903 } 00:22:06.903 }, 00:22:06.903 { 00:22:06.903 "method": "nvmf_subsystem_add_listener", 00:22:06.903 "params": { 00:22:06.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.903 "listen_address": { 00:22:06.903 "trtype": "TCP", 00:22:06.903 "adrfam": "IPv4", 00:22:06.903 "traddr": "10.0.0.2", 00:22:06.903 "trsvcid": "4420" 00:22:06.903 }, 00:22:06.903 "secure_channel": true 00:22:06.903 } 00:22:06.903 } 00:22:06.903 ] 00:22:06.903 } 00:22:06.903 ] 00:22:06.903 }' 00:22:06.903 14:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:07.160 14:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:07.160 "subsystems": [ 00:22:07.160 { 00:22:07.160 "subsystem": "keyring", 00:22:07.160 "config": [ 00:22:07.160 { 00:22:07.160 "method": "keyring_file_add_key", 00:22:07.160 "params": { 00:22:07.160 "name": "key0", 00:22:07.160 "path": "/tmp/tmp.Xz9RBDbITt" 00:22:07.160 } 00:22:07.160 } 00:22:07.160 ] 00:22:07.160 }, 00:22:07.160 { 00:22:07.160 "subsystem": "iobuf", 00:22:07.160 "config": [ 00:22:07.160 { 00:22:07.160 "method": "iobuf_set_options", 00:22:07.160 "params": { 00:22:07.160 "small_pool_count": 8192, 00:22:07.160 "large_pool_count": 1024, 00:22:07.160 "small_bufsize": 8192, 00:22:07.160 "large_bufsize": 135168 00:22:07.160 } 00:22:07.160 } 00:22:07.160 ] 00:22:07.160 }, 00:22:07.160 { 00:22:07.160 "subsystem": "sock", 00:22:07.160 "config": [ 00:22:07.160 { 00:22:07.160 "method": "sock_impl_set_options", 00:22:07.160 "params": { 00:22:07.160 "impl_name": "uring", 00:22:07.160 "recv_buf_size": 2097152, 00:22:07.160 "send_buf_size": 2097152, 00:22:07.160 "enable_recv_pipe": true, 00:22:07.160 "enable_quickack": false, 00:22:07.160 "enable_placement_id": 0, 00:22:07.160 "enable_zerocopy_send_server": false, 00:22:07.160 "enable_zerocopy_send_client": false, 00:22:07.160 "zerocopy_threshold": 0, 00:22:07.160 "tls_version": 0, 00:22:07.160 "enable_ktls": false 00:22:07.160 } 00:22:07.160 }, 00:22:07.160 { 00:22:07.160 "method": "sock_impl_set_options", 00:22:07.160 "params": { 00:22:07.160 "impl_name": "posix", 00:22:07.160 "recv_buf_size": 2097152, 00:22:07.160 "send_buf_size": 2097152, 00:22:07.160 "enable_recv_pipe": true, 00:22:07.160 "enable_quickack": false, 00:22:07.160 "enable_placement_id": 0, 00:22:07.160 "enable_zerocopy_send_server": true, 00:22:07.160 "enable_zerocopy_send_client": false, 00:22:07.160 "zerocopy_threshold": 0, 00:22:07.160 "tls_version": 0, 00:22:07.160 "enable_ktls": false 00:22:07.160 } 00:22:07.160 }, 00:22:07.160 { 00:22:07.160 "method": "sock_impl_set_options", 00:22:07.160 "params": { 00:22:07.160 "impl_name": "ssl", 00:22:07.160 "recv_buf_size": 4096, 00:22:07.160 "send_buf_size": 4096, 00:22:07.160 "enable_recv_pipe": true, 00:22:07.160 "enable_quickack": false, 00:22:07.160 "enable_placement_id": 0, 00:22:07.160 "enable_zerocopy_send_server": true, 00:22:07.160 "enable_zerocopy_send_client": false, 00:22:07.160 "zerocopy_threshold": 0, 00:22:07.160 "tls_version": 0, 
00:22:07.160 "enable_ktls": false 00:22:07.160 } 00:22:07.160 } 00:22:07.160 ] 00:22:07.160 }, 00:22:07.160 { 00:22:07.160 "subsystem": "vmd", 00:22:07.160 "config": [] 00:22:07.160 }, 00:22:07.160 { 00:22:07.160 "subsystem": "accel", 00:22:07.160 "config": [ 00:22:07.160 { 00:22:07.160 "method": "accel_set_options", 00:22:07.160 "params": { 00:22:07.160 "small_cache_size": 128, 00:22:07.160 "large_cache_size": 16, 00:22:07.160 "task_count": 2048, 00:22:07.160 "sequence_count": 2048, 00:22:07.161 "buf_count": 2048 00:22:07.161 } 00:22:07.161 } 00:22:07.161 ] 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "subsystem": "bdev", 00:22:07.161 "config": [ 00:22:07.161 { 00:22:07.161 "method": "bdev_set_options", 00:22:07.161 "params": { 00:22:07.161 "bdev_io_pool_size": 65535, 00:22:07.161 "bdev_io_cache_size": 256, 00:22:07.161 "bdev_auto_examine": true, 00:22:07.161 "iobuf_small_cache_size": 128, 00:22:07.161 "iobuf_large_cache_size": 16 00:22:07.161 } 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "method": "bdev_raid_set_options", 00:22:07.161 "params": { 00:22:07.161 "process_window_size_kb": 1024 00:22:07.161 } 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "method": "bdev_iscsi_set_options", 00:22:07.161 "params": { 00:22:07.161 "timeout_sec": 30 00:22:07.161 } 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "method": "bdev_nvme_set_options", 00:22:07.161 "params": { 00:22:07.161 "action_on_timeout": "none", 00:22:07.161 "timeout_us": 0, 00:22:07.161 "timeout_admin_us": 0, 00:22:07.161 "keep_alive_timeout_ms": 10000, 00:22:07.161 "arbitration_burst": 0, 00:22:07.161 "low_priority_weight": 0, 00:22:07.161 "medium_priority_weight": 0, 00:22:07.161 "high_priority_weight": 0, 00:22:07.161 "nvme_adminq_poll_period_us": 10000, 00:22:07.161 "nvme_ioq_poll_period_us": 0, 00:22:07.161 "io_queue_requests": 512, 00:22:07.161 "delay_cmd_submit": true, 00:22:07.161 "transport_retry_count": 4, 00:22:07.161 "bdev_retry_count": 3, 00:22:07.161 "transport_ack_timeout": 0, 00:22:07.161 "ctrlr_loss_timeout_sec": 0, 00:22:07.161 "reconnect_delay_sec": 0, 00:22:07.161 "fast_io_fail_timeout_sec": 0, 00:22:07.161 "disable_auto_failback": false, 00:22:07.161 "generate_uuids": false, 00:22:07.161 "transport_tos": 0, 00:22:07.161 "nvme_error_stat": false, 00:22:07.161 "rdma_srq_size": 0, 00:22:07.161 "io_path_stat": false, 00:22:07.161 "allow_accel_sequence": false, 00:22:07.161 "rdma_max_cq_size": 0, 00:22:07.161 "rdma_cm_event_timeout_ms": 0, 00:22:07.161 "dhchap_digests": [ 00:22:07.161 "sha256", 00:22:07.161 "sha384", 00:22:07.161 "sha512" 00:22:07.161 ], 00:22:07.161 "dhchap_dhgroups": [ 00:22:07.161 "null", 00:22:07.161 "ffdhe2048", 00:22:07.161 "ffdhe3072", 00:22:07.161 "ffdhe4096", 00:22:07.161 "ffdhe6144", 00:22:07.161 "ffdhe8192" 00:22:07.161 ] 00:22:07.161 } 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "method": "bdev_nvme_attach_controller", 00:22:07.161 "params": { 00:22:07.161 "name": "nvme0", 00:22:07.161 "trtype": "TCP", 00:22:07.161 "adrfam": "IPv4", 00:22:07.161 "traddr": "10.0.0.2", 00:22:07.161 "trsvcid": "4420", 00:22:07.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.161 "prchk_reftag": false, 00:22:07.161 "prchk_guard": false, 00:22:07.161 "ctrlr_loss_timeout_sec": 0, 00:22:07.161 "reconnect_delay_sec": 0, 00:22:07.161 "fast_io_fail_timeout_sec": 0, 00:22:07.161 "psk": "key0", 00:22:07.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.161 "hdgst": false, 00:22:07.161 "ddgst": false 00:22:07.161 } 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "method": "bdev_nvme_set_hotplug", 00:22:07.161 "params": 
{ 00:22:07.161 "period_us": 100000, 00:22:07.161 "enable": false 00:22:07.161 } 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "method": "bdev_enable_histogram", 00:22:07.161 "params": { 00:22:07.161 "name": "nvme0n1", 00:22:07.161 "enable": true 00:22:07.161 } 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "method": "bdev_wait_for_examine" 00:22:07.161 } 00:22:07.161 ] 00:22:07.161 }, 00:22:07.161 { 00:22:07.161 "subsystem": "nbd", 00:22:07.161 "config": [] 00:22:07.161 } 00:22:07.161 ] 00:22:07.161 }' 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 72758 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72758 ']' 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72758 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72758 00:22:07.161 killing process with pid 72758 00:22:07.161 Received shutdown signal, test time was about 1.000000 seconds 00:22:07.161 00:22:07.161 Latency(us) 00:22:07.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.161 =================================================================================================================== 00:22:07.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72758' 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72758 00:22:07.161 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72758 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 72727 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72727 ']' 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72727 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72727 00:22:07.419 killing process with pid 72727 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72727' 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72727 00:22:07.419 [2024-05-15 14:00:05.796623] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:07.419 14:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72727 00:22:07.677 14:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:07.677 14:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:07.677 "subsystems": [ 00:22:07.677 { 00:22:07.677 
"subsystem": "keyring", 00:22:07.677 "config": [ 00:22:07.677 { 00:22:07.677 "method": "keyring_file_add_key", 00:22:07.677 "params": { 00:22:07.677 "name": "key0", 00:22:07.677 "path": "/tmp/tmp.Xz9RBDbITt" 00:22:07.677 } 00:22:07.677 } 00:22:07.677 ] 00:22:07.677 }, 00:22:07.677 { 00:22:07.677 "subsystem": "iobuf", 00:22:07.677 "config": [ 00:22:07.677 { 00:22:07.677 "method": "iobuf_set_options", 00:22:07.677 "params": { 00:22:07.677 "small_pool_count": 8192, 00:22:07.677 "large_pool_count": 1024, 00:22:07.677 "small_bufsize": 8192, 00:22:07.677 "large_bufsize": 135168 00:22:07.677 } 00:22:07.677 } 00:22:07.677 ] 00:22:07.677 }, 00:22:07.677 { 00:22:07.677 "subsystem": "sock", 00:22:07.677 "config": [ 00:22:07.677 { 00:22:07.677 "method": "sock_impl_set_options", 00:22:07.677 "params": { 00:22:07.677 "impl_name": "uring", 00:22:07.677 "recv_buf_size": 2097152, 00:22:07.677 "send_buf_size": 2097152, 00:22:07.677 "enable_recv_pipe": true, 00:22:07.677 "enable_quickack": false, 00:22:07.677 "enable_placement_id": 0, 00:22:07.677 "enable_zerocopy_send_server": false, 00:22:07.677 "enable_zerocopy_send_client": false, 00:22:07.677 "zerocopy_threshold": 0, 00:22:07.677 "tls_version": 0, 00:22:07.677 "enable_ktls": false 00:22:07.677 } 00:22:07.677 }, 00:22:07.677 { 00:22:07.677 "method": "sock_impl_set_options", 00:22:07.677 "params": { 00:22:07.677 "impl_name": "posix", 00:22:07.677 "recv_buf_size": 2097152, 00:22:07.677 "send_buf_size": 2097152, 00:22:07.677 "enable_recv_pipe": true, 00:22:07.677 "enable_quickack": false, 00:22:07.677 "enable_placement_id": 0, 00:22:07.677 "enable_zerocopy_send_server": true, 00:22:07.677 "enable_zerocopy_send_client": false, 00:22:07.677 "zerocopy_threshold": 0, 00:22:07.677 "tls_version": 0, 00:22:07.677 "enable_ktls": false 00:22:07.677 } 00:22:07.677 }, 00:22:07.677 { 00:22:07.677 "method": "sock_impl_set_options", 00:22:07.677 "params": { 00:22:07.677 "impl_name": "ssl", 00:22:07.677 "recv_buf_size": 4096, 00:22:07.677 "send_buf_size": 4096, 00:22:07.677 "enable_recv_pipe": true, 00:22:07.677 "enable_quickack": false, 00:22:07.677 "enable_placement_id": 0, 00:22:07.677 "enable_zerocopy_send_server": true, 00:22:07.677 "enable_zerocopy_send_client": false, 00:22:07.677 "zerocopy_threshold": 0, 00:22:07.677 "tls_version": 0, 00:22:07.677 "enable_ktls": false 00:22:07.677 } 00:22:07.677 } 00:22:07.677 ] 00:22:07.677 }, 00:22:07.677 { 00:22:07.677 "subsystem": "vmd", 00:22:07.677 "config": [] 00:22:07.677 }, 00:22:07.677 { 00:22:07.677 "subsystem": "accel", 00:22:07.677 "config": [ 00:22:07.677 { 00:22:07.677 "method": "accel_set_options", 00:22:07.677 "params": { 00:22:07.677 "small_cache_size": 128, 00:22:07.677 "large_cache_size": 16, 00:22:07.677 "task_count": 2048, 00:22:07.677 "sequence_count": 2048, 00:22:07.677 "buf_count": 2048 00:22:07.677 } 00:22:07.677 } 00:22:07.677 ] 00:22:07.677 }, 00:22:07.677 { 00:22:07.677 "subsystem": "bdev", 00:22:07.677 "config": [ 00:22:07.677 { 00:22:07.677 "method": "bdev_set_options", 00:22:07.677 "params": { 00:22:07.677 "bdev_io_pool_size": 65535, 00:22:07.677 "bdev_io_cache_size": 256, 00:22:07.677 "bdev_auto_examine": true, 00:22:07.677 "iobuf_small_cache_size": 128, 00:22:07.677 "iobuf_large_cache_size": 16 00:22:07.677 } 00:22:07.677 }, 00:22:07.677 { 00:22:07.678 "method": "bdev_raid_set_options", 00:22:07.678 "params": { 00:22:07.678 "process_window_size_kb": 1024 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "bdev_iscsi_set_options", 00:22:07.678 "params": { 00:22:07.678 
"timeout_sec": 30 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "bdev_nvme_set_options", 00:22:07.678 "params": { 00:22:07.678 "action_on_timeout": "none", 00:22:07.678 "timeout_us": 0, 00:22:07.678 "timeout_admin_us": 0, 00:22:07.678 "keep_alive_timeout_ms": 10000, 00:22:07.678 "arbitration_burst": 0, 00:22:07.678 "low_priority_weight": 0, 00:22:07.678 "medium_priority_weight": 0, 00:22:07.678 "high_priority_weight": 0, 00:22:07.678 "nvme_adminq_poll_period_us": 10000, 00:22:07.678 "nvme_ioq_poll_period_us": 0, 00:22:07.678 "io_queue_requests": 0, 00:22:07.678 "delay_cmd_submit": true, 00:22:07.678 "transport_retry_count": 4, 00:22:07.678 "bdev_retry_count": 3, 00:22:07.678 "transport_ack_timeout": 0, 00:22:07.678 "ctrlr_loss_timeout_sec": 0, 00:22:07.678 "reconnect_delay_sec": 0, 00:22:07.678 "fast_io_fail_timeout_sec": 0, 00:22:07.678 "disable_auto_failback": false, 00:22:07.678 "generate_uuids": false, 00:22:07.678 "transport_tos": 0, 00:22:07.678 "nvme_error_stat": false, 00:22:07.678 "rdma_srq_size": 0, 00:22:07.678 "io_path_stat": false, 00:22:07.678 "allow_accel_sequence": false, 00:22:07.678 "rdma_max_cq_size": 0, 00:22:07.678 "rdma_cm_event_timeout_ms": 0, 00:22:07.678 "dhchap_digests": [ 00:22:07.678 "sha256", 00:22:07.678 "sha384", 00:22:07.678 "sha512" 00:22:07.678 ], 00:22:07.678 "dhchap_dhgroups": [ 00:22:07.678 "null", 00:22:07.678 "ffdhe2048", 00:22:07.678 "ffdhe3072", 00:22:07.678 "ffdhe4096", 00:22:07.678 "ffdhe6144", 00:22:07.678 "ffdhe8192" 00:22:07.678 ] 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "bdev_nvme_set_hotplug", 00:22:07.678 "params": { 00:22:07.678 "period_us": 100000, 00:22:07.678 "enable": false 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "bdev_malloc_create", 00:22:07.678 "params": { 00:22:07.678 "name": "malloc0", 00:22:07.678 "num_blocks": 8192, 00:22:07.678 "block_size": 4096, 00:22:07.678 "physical_block_size": 4096, 00:22:07.678 "uuid": "487889d2-bdcd-41dd-bba6-3679c4642096", 00:22:07.678 "optimal_io_boundary": 0 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "bdev_wait_for_examine" 00:22:07.678 } 00:22:07.678 ] 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "subsystem": "nbd", 00:22:07.678 "config": [] 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "subsystem": "scheduler", 00:22:07.678 "config": [ 00:22:07.678 { 00:22:07.678 "method": "framework_set_scheduler", 00:22:07.678 "params": { 00:22:07.678 "name": "static" 00:22:07.678 } 00:22:07.678 } 00:22:07.678 ] 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "subsystem": "nvmf", 00:22:07.678 "config": [ 00:22:07.678 { 00:22:07.678 "method": "nvmf_set_config", 00:22:07.678 "params": { 00:22:07.678 "discovery_filter": "match_any", 00:22:07.678 "admin_cmd_passthru": { 00:22:07.678 "identify_ctrlr": false 00:22:07.678 } 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "nvmf_set_max_subsystems", 00:22:07.678 "params": { 00:22:07.678 "max_subsystems": 1024 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "nvmf_set_crdt", 00:22:07.678 "params": { 00:22:07.678 "crdt1": 0, 00:22:07.678 "crdt2": 0, 00:22:07.678 "crdt3": 0 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "nvmf_create_transport", 00:22:07.678 "params": { 00:22:07.678 "trtype": "TCP", 00:22:07.678 "max_queue_depth": 128, 00:22:07.678 "max_io_qpairs_per_ctrlr": 127, 00:22:07.678 "in_capsule_data_size": 4096, 00:22:07.678 "max_io_size": 131072, 00:22:07.678 "io_unit_size": 131072, 00:22:07.678 
"max_aq_depth": 128, 00:22:07.678 "num_shared_buffers": 511, 00:22:07.678 "buf_cache_size": 4294967295, 00:22:07.678 "dif_insert_or_strip": false, 00:22:07.678 "zcopy": false, 00:22:07.678 "c2h_success": false, 00:22:07.678 "sock_priority": 0, 00:22:07.678 "abort_timeout_sec": 1, 00:22:07.678 "ack_timeout": 0, 00:22:07.678 "data_wr_pool_size": 0 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "nvmf_create_subsystem", 00:22:07.678 "params": { 00:22:07.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.678 "allow_any_host": false, 00:22:07.678 "serial_number": "00000000000000000000", 00:22:07.678 "model_number": "SPDK bdev Controller", 00:22:07.678 "max_namespaces": 32, 00:22:07.678 "min_cntlid": 1, 00:22:07.678 "max_cntlid": 65519, 00:22:07.678 "ana_reporting": false 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "nvmf_subsystem_add_host", 00:22:07.678 "params": { 00:22:07.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.678 "host": "nqn.2016-06.io.spdk:host1", 00:22:07.678 "psk": "key0" 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "nvmf_subsystem_add_ns", 00:22:07.678 "params": { 00:22:07.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.678 "namespace": { 00:22:07.678 "nsid": 1, 00:22:07.678 "bdev_name": "malloc0", 00:22:07.678 "nguid": "487889D2BDCD41DDBBA63679C4642096", 00:22:07.678 "uuid": "487889d2-bdcd-41dd-bba6-3679c4642096", 00:22:07.678 "no_auto_visible": false 00:22:07.678 } 00:22:07.678 } 00:22:07.678 }, 00:22:07.678 { 00:22:07.678 "method": "nvmf_subsystem_add_listener", 00:22:07.678 "params": { 00:22:07.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.678 "listen_address": { 00:22:07.678 "trtype": "TCP", 00:22:07.678 "adrfam": "IPv4", 00:22:07.678 "traddr": "10.0.0.2", 00:22:07.678 "trsvcid": "4420" 00:22:07.678 }, 00:22:07.678 "secure_channel": true 00:22:07.678 } 00:22:07.678 } 00:22:07.678 ] 00:22:07.678 } 00:22:07.678 ] 00:22:07.678 }' 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72814 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72814 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72814 ']' 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:07.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:07.678 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.678 [2024-05-15 14:00:06.093616] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:22:07.678 [2024-05-15 14:00:06.093974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.678 [2024-05-15 14:00:06.226183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.939 [2024-05-15 14:00:06.324128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.939 [2024-05-15 14:00:06.324174] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.939 [2024-05-15 14:00:06.324184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.939 [2024-05-15 14:00:06.324193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.939 [2024-05-15 14:00:06.324200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.939 [2024-05-15 14:00:06.324273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.197 [2024-05-15 14:00:06.539953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.197 [2024-05-15 14:00:06.571807] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:08.197 [2024-05-15 14:00:06.571888] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:08.197 [2024-05-15 14:00:06.572052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.455 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:08.455 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:08.455 14:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.455 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.455 14:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.714 14:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.714 14:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=72845 00:22:08.714 14:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 72845 /var/tmp/bdevperf.sock 00:22:08.714 14:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:08.714 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72845 ']' 00:22:08.714 14:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:08.714 "subsystems": [ 00:22:08.714 { 00:22:08.714 "subsystem": "keyring", 00:22:08.714 "config": [ 00:22:08.714 { 00:22:08.714 "method": "keyring_file_add_key", 00:22:08.714 "params": { 00:22:08.714 "name": "key0", 00:22:08.714 "path": "/tmp/tmp.Xz9RBDbITt" 00:22:08.714 } 00:22:08.714 } 00:22:08.714 ] 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "subsystem": "iobuf", 00:22:08.714 "config": [ 00:22:08.714 { 00:22:08.714 "method": "iobuf_set_options", 00:22:08.714 "params": { 00:22:08.714 "small_pool_count": 8192, 00:22:08.714 "large_pool_count": 1024, 00:22:08.714 "small_bufsize": 8192, 00:22:08.714 "large_bufsize": 135168 00:22:08.714 } 
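(Annotation.) At this point the test has shown that a target restarted purely from the JSON captured earlier by save_config (passed in as -c /dev/fd/62) comes back up listening on 10.0.0.2:4420 with the same keyring and TLS settings, without reissuing any RPCs. The shape of that replay, as far as it can be read from the trace, is roughly the sketch below; the /dev/fd/62 and /dev/fd/63 descriptors are consistent with bash process substitution, which is an assumption here, and the ip netns exec nvmf_tgt_ns_spdk wrapper seen in the trace is dropped for brevity.

tgtcfg=$($rpc save_config)                                   # rpc_cmd save_config in the trace
bperfcfg=$($rpc -s /var/tmp/bdevperf.sock save_config)       # target/tls.sh@264

# relaunch both ends from the captured JSON instead of issuing RPCs again
nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")                 # appears as -c /dev/fd/62 above
bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
    -c <(echo "$bperfcfg")                                   # appears as -c /dev/fd/63 below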
00:22:08.714 } 00:22:08.714 ] 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "subsystem": "sock", 00:22:08.714 "config": [ 00:22:08.714 { 00:22:08.714 "method": "sock_impl_set_options", 00:22:08.714 "params": { 00:22:08.714 "impl_name": "uring", 00:22:08.714 "recv_buf_size": 2097152, 00:22:08.714 "send_buf_size": 2097152, 00:22:08.714 "enable_recv_pipe": true, 00:22:08.714 "enable_quickack": false, 00:22:08.714 "enable_placement_id": 0, 00:22:08.714 "enable_zerocopy_send_server": false, 00:22:08.714 "enable_zerocopy_send_client": false, 00:22:08.714 "zerocopy_threshold": 0, 00:22:08.714 "tls_version": 0, 00:22:08.714 "enable_ktls": false 00:22:08.714 } 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "method": "sock_impl_set_options", 00:22:08.714 "params": { 00:22:08.714 "impl_name": "posix", 00:22:08.714 "recv_buf_size": 2097152, 00:22:08.714 "send_buf_size": 2097152, 00:22:08.714 "enable_recv_pipe": true, 00:22:08.714 "enable_quickack": false, 00:22:08.714 "enable_placement_id": 0, 00:22:08.714 "enable_zerocopy_send_server": true, 00:22:08.714 "enable_zerocopy_send_client": false, 00:22:08.714 "zerocopy_threshold": 0, 00:22:08.714 "tls_version": 0, 00:22:08.714 "enable_ktls": false 00:22:08.714 } 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "method": "sock_impl_set_options", 00:22:08.714 "params": { 00:22:08.714 "impl_name": "ssl", 00:22:08.714 "recv_buf_size": 4096, 00:22:08.714 "send_buf_size": 4096, 00:22:08.714 "enable_recv_pipe": true, 00:22:08.714 "enable_quickack": false, 00:22:08.714 "enable_placement_id": 0, 00:22:08.714 "enable_zerocopy_send_server": true, 00:22:08.714 "enable_zerocopy_send_client": false, 00:22:08.714 "zerocopy_threshold": 0, 00:22:08.714 "tls_version": 0, 00:22:08.714 "enable_ktls": false 00:22:08.714 } 00:22:08.714 } 00:22:08.714 ] 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "subsystem": "vmd", 00:22:08.714 "config": [] 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "subsystem": "accel", 00:22:08.714 "config": [ 00:22:08.714 { 00:22:08.714 "method": "accel_set_options", 00:22:08.714 "params": { 00:22:08.714 "small_cache_size": 128, 00:22:08.714 "large_cache_size": 16, 00:22:08.714 "task_count": 2048, 00:22:08.714 "sequence_count": 2048, 00:22:08.714 "buf_count": 2048 00:22:08.714 } 00:22:08.714 } 00:22:08.714 ] 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "subsystem": "bdev", 00:22:08.714 "config": [ 00:22:08.714 { 00:22:08.714 "method": "bdev_set_options", 00:22:08.714 "params": { 00:22:08.714 "bdev_io_pool_size": 65535, 00:22:08.714 "bdev_io_cache_size": 256, 00:22:08.714 "bdev_auto_examine": true, 00:22:08.714 "iobuf_small_cache_size": 128, 00:22:08.714 "iobuf_large_cache_size": 16 00:22:08.714 } 00:22:08.714 }, 00:22:08.714 { 00:22:08.714 "method": "bdev_raid_set_options", 00:22:08.714 "params": { 00:22:08.715 "process_window_size_kb": 1024 00:22:08.715 } 00:22:08.715 }, 00:22:08.715 { 00:22:08.715 "method": "bdev_iscsi_set_options", 00:22:08.715 "params": { 00:22:08.715 "timeout_sec": 30 00:22:08.715 } 00:22:08.715 }, 00:22:08.715 { 00:22:08.715 "method": "bdev_nvme_set_options", 00:22:08.715 "params": { 00:22:08.715 "action_on_timeout": "none", 00:22:08.715 "timeout_us": 0, 00:22:08.715 "timeout_admin_us": 0, 00:22:08.715 "keep_alive_timeout_ms": 10000, 00:22:08.715 "arbitration_burst": 0, 00:22:08.715 "low_priority_weight": 0, 00:22:08.715 "medium_priority_weight": 0, 00:22:08.715 "high_priority_weight": 0, 00:22:08.715 "nvme_adminq_poll_period_us": 10000, 00:22:08.715 "nvme_ioq_poll_period_us": 0, 00:22:08.715 "io_queue_requests": 512, 00:22:08.715 
"delay_cmd_submit": true, 00:22:08.715 "transport_retry_count": 4, 00:22:08.715 "bdev_retry_count": 3, 00:22:08.715 "transport_ack_timeout": 0, 00:22:08.715 "ctrlr_loss_timeout_sec": 0, 00:22:08.715 "reconnect_delay_sec": 0, 00:22:08.715 "fast_io_fail_timeout_sec": 0, 00:22:08.715 "disable_auto_failback": false, 00:22:08.715 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.715 "generate_uuids": false, 00:22:08.715 "transport_tos": 0, 00:22:08.715 "nvme_error_stat": false, 00:22:08.715 "rdma_srq_size": 0, 00:22:08.715 "io_path_stat": false, 00:22:08.715 "allow_accel_sequence": false, 00:22:08.715 "rdma_max_cq_size": 0, 00:22:08.715 "rdma_cm_event_timeout_ms": 0, 00:22:08.715 "dhchap_digests": [ 00:22:08.715 "sha256", 00:22:08.715 "sha384", 00:22:08.715 "sha512" 00:22:08.715 ], 00:22:08.715 "dhchap_dhgroups": [ 00:22:08.715 "null", 00:22:08.715 "ffdhe2048", 00:22:08.715 "ffdhe3072", 00:22:08.715 "ffdhe4096", 00:22:08.715 "ffdhe6144", 00:22:08.715 "ffdhe8192" 00:22:08.715 ] 00:22:08.715 } 00:22:08.715 }, 00:22:08.715 { 00:22:08.715 "method": "bdev_nvme_attach_controller", 00:22:08.715 "params": { 00:22:08.715 "name": "nvme0", 00:22:08.715 "trtype": "TCP", 00:22:08.715 "adrfam": "IPv4", 00:22:08.715 "traddr": "10.0.0.2", 00:22:08.715 "trsvcid": "4420", 00:22:08.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.715 "prchk_reftag": false, 00:22:08.715 "prchk_guard": false, 00:22:08.715 "ctrlr_loss_timeout_sec": 0, 00:22:08.715 "reconnect_delay_sec": 0, 00:22:08.715 "fast_io_fail_timeout_sec": 0, 00:22:08.715 "psk": "key0", 00:22:08.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.715 "hdgst": false, 00:22:08.715 "ddgst": false 00:22:08.715 } 00:22:08.715 }, 00:22:08.715 { 00:22:08.715 "method": "bdev_nvme_set_hotplug", 00:22:08.715 "params": { 00:22:08.715 "period_us": 100000, 00:22:08.715 "enable": false 00:22:08.715 } 00:22:08.715 }, 00:22:08.715 { 00:22:08.715 "method": "bdev_enable_histogram", 00:22:08.715 "params": { 00:22:08.715 "name": "nvme0n1", 00:22:08.715 "enable": true 00:22:08.715 } 00:22:08.715 }, 00:22:08.715 { 00:22:08.715 "method": "bdev_wait_for_examine" 00:22:08.715 } 00:22:08.715 ] 00:22:08.715 }, 00:22:08.715 { 00:22:08.715 "subsystem": "nbd", 00:22:08.715 "config": [] 00:22:08.715 } 00:22:08.715 ] 00:22:08.715 }' 00:22:08.715 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:08.715 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.715 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:08.715 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.715 [2024-05-15 14:00:07.065011] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:22:08.715 [2024-05-15 14:00:07.065252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72845 ] 00:22:08.715 [2024-05-15 14:00:07.207526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.973 [2024-05-15 14:00:07.342066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.973 [2024-05-15 14:00:07.520198] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.540 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:09.540 14:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:09.540 14:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:09.540 14:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:09.799 14:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.799 14:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:09.799 Running I/O for 1 seconds... 00:22:10.749 00:22:10.749 Latency(us) 00:22:10.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.749 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:10.749 Verification LBA range: start 0x0 length 0x2000 00:22:10.749 nvme0n1 : 1.01 5434.50 21.23 0.00 0.00 23369.04 5158.66 17265.71 00:22:10.749 =================================================================================================================== 00:22:10.749 Total : 5434.50 21.23 0.00 0.00 23369.04 5158.66 17265.71 00:22:10.749 0 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:11.009 nvmf_trace.0 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 72845 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72845 ']' 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72845 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72845 00:22:11.009 killing process with pid 72845 00:22:11.009 Received shutdown signal, test time was about 1.000000 seconds 00:22:11.009 00:22:11.009 Latency(us) 00:22:11.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.009 =================================================================================================================== 00:22:11.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72845' 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72845 00:22:11.009 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72845 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.276 rmmod nvme_tcp 00:22:11.276 rmmod nvme_fabrics 00:22:11.276 rmmod nvme_keyring 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 72814 ']' 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 72814 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72814 ']' 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72814 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72814 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72814' 00:22:11.276 killing process with pid 72814 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72814 00:22:11.276 [2024-05-15 14:00:09.782650] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:11.276 14:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72814 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.c9xUykpqdE /tmp/tmp.H4zdTSHLf6 /tmp/tmp.Xz9RBDbITt 00:22:11.541 ************************************ 00:22:11.541 END TEST nvmf_tls 00:22:11.541 ************************************ 00:22:11.541 00:22:11.541 real 1m22.808s 00:22:11.541 user 2m3.760s 00:22:11.541 sys 0m30.747s 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:11.541 14:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.807 14:00:10 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:11.807 14:00:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:11.807 14:00:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:11.807 14:00:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.807 ************************************ 00:22:11.807 START TEST nvmf_fips 00:22:11.807 ************************************ 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:11.807 * Looking for test storage... 
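The nvmf_tls suite exits through its cleanup trap above, and run_test then launches the next suite (fips.sh) the same way it launched this one. A minimal sketch of that wrapper pattern, assuming a simplified harness (the real run_test in autotest_common.sh also validates its arguments and manages the per-test xtrace prefix seen in these traces):

run_test() {
    local suite=$1; shift
    echo "************************************"
    echo "START TEST $suite"
    echo "************************************"
    time "$@"                 # the suite script, e.g. fips.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $suite"
    echo "************************************"
    return "$rc"
}

# as invoked above:
# run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp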
00:22:11.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:11.807 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:11.808 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:12.081 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:12.082 Error setting digest 00:22:12.082 00824C72257F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:12.082 00824C72257F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:12.082 Cannot find device "nvmf_tgt_br" 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.082 Cannot find device "nvmf_tgt_br2" 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:12.082 Cannot find device "nvmf_tgt_br" 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:12.082 Cannot find device "nvmf_tgt_br2" 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:12.082 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:12.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:22:12.346 00:22:12.346 --- 10.0.0.2 ping statistics --- 00:22:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.346 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:12.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:12.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:12.346 00:22:12.346 --- 10.0.0.3 ping statistics --- 00:22:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.346 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:12.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:12.346 00:22:12.346 --- 10.0.0.1 ping statistics --- 00:22:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.346 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73113 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73113 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 73113 ']' 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:12.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:12.346 14:00:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.346 [2024-05-15 14:00:10.902942] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
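At this point nvmf_veth_init has built the virtual topology the rest of the test runs over: a network namespace for the target, veth pairs, a bridge joining the host-side ends, an iptables rule accepting the NVMe/TCP port, and one-shot pings to confirm reachability. Condensed from the commands traced above (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is created the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator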
00:22:12.346 [2024-05-15 14:00:10.903016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.617 [2024-05-15 14:00:11.044239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.617 [2024-05-15 14:00:11.129993] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.617 [2024-05-15 14:00:11.130036] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.617 [2024-05-15 14:00:11.130045] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.617 [2024-05-15 14:00:11.130054] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.617 [2024-05-15 14:00:11.130061] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.617 [2024-05-15 14:00:11.130084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:13.207 14:00:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:13.474 [2024-05-15 14:00:11.937684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.474 [2024-05-15 14:00:11.953577] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:13.474 [2024-05-15 14:00:11.953633] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.474 [2024-05-15 14:00:11.953802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.474 [2024-05-15 14:00:11.982488] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:13.474 malloc0 00:22:13.474 14:00:12 
nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73147 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73147 /var/tmp/bdevperf.sock 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 73147 ']' 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:13.474 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.739 [2024-05-15 14:00:12.080893] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:13.739 [2024-05-15 14:00:12.080963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73147 ] 00:22:13.739 [2024-05-15 14:00:12.217249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.010 [2024-05-15 14:00:12.308223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.582 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.582 14:00:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:22:14.582 14:00:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:14.582 [2024-05-15 14:00:13.096268] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.582 [2024-05-15 14:00:13.096381] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:14.851 TLSTESTn1 00:22:14.851 14:00:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.851 Running I/O for 10 seconds... 
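With the target listening on 10.0.0.2:4420 and the PSK written to key.txt, the test attaches a TLS-protected controller through bdevperf's RPC socket and then drives the verify workload bdevperf was started with (-q 128 -o 4096 -w verify -t 10). The two calls, as traced above (note the deprecation warning: the --psk path form is scheduled for removal in v24.09):

# create bdev TLSTESTn1 backed by an NVMe-oF/TCP controller using a TLS pre-shared key
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

# start the queued I/O; results are reported when the 10-second run completes
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests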
00:22:24.844 00:22:24.844 Latency(us) 00:22:24.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.844 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:24.844 Verification LBA range: start 0x0 length 0x2000 00:22:24.844 TLSTESTn1 : 10.01 5594.61 21.85 0.00 0.00 22843.08 4737.54 17160.43 00:22:24.844 =================================================================================================================== 00:22:24.844 Total : 5594.61 21.85 0.00 0.00 22843.08 4737.54 17160.43 00:22:24.844 0 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:24.844 nvmf_trace.0 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73147 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 73147 ']' 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 73147 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:22:24.844 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73147 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:25.103 killing process with pid 73147 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73147' 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 73147 00:22:25.103 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.103 00:22:25.103 Latency(us) 00:22:25.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.103 =================================================================================================================== 00:22:25.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.103 [2024-05-15 14:00:23.434347] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 73147 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
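Before the processes are torn down, process_shm archives the target's trace shared-memory file so the run can be analysed offline. A reduced sketch of what the traced find/tar pair does (output directory as used by this job):

# collect SPDK trace shm files (here nvmf_trace.0) left under /dev/shm
shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
for f in $shm_files; do
    tar -C /dev/shm/ -cvzf "/home/vagrant/spdk_repo/spdk/../output/${f}_shm.tar.gz" "$f"
done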
00:22:25.103 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.360 rmmod nvme_tcp 00:22:25.360 rmmod nvme_fabrics 00:22:25.360 rmmod nvme_keyring 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73113 ']' 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73113 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 73113 ']' 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 73113 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73113 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:25.360 killing process with pid 73113 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73113' 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 73113 00:22:25.360 [2024-05-15 14:00:23.804824] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:25.360 [2024-05-15 14:00:23.804856] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:25.360 14:00:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 73113 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:25.618 00:22:25.618 real 0m13.974s 00:22:25.618 user 0m18.205s 00:22:25.618 sys 0m6.071s 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:25.618 
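nvmftestfini then unwinds everything nvmftestinit set up: flush buffers, unload the kernel NVMe/TCP stack, kill the target started by nvmfappstart, drop the namespace and initiator address, and remove the temporary PSK file. Condensed from the trace above; the _remove_spdk_ns helper is not expanded in the log, so the netns delete shown here is an assumption about what it runs:

sync
modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill 73113                           # nvmf_tgt pid recorded by nvmfappstart
ip netns delete nvmf_tgt_ns_spdk     # assumed body of _remove_spdk_ns
ip -4 addr flush nvmf_init_if
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt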
************************************ 00:22:25.618 END TEST nvmf_fips 00:22:25.618 14:00:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:25.618 ************************************ 00:22:25.618 14:00:24 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:25.618 14:00:24 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:22:25.618 14:00:24 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:25.618 14:00:24 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.618 14:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.878 14:00:24 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:25.878 14:00:24 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:25.878 14:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.878 14:00:24 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 1 -eq 0 ]] 00:22:25.878 14:00:24 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:25.878 14:00:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:25.878 14:00:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:25.878 14:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.878 ************************************ 00:22:25.878 START TEST nvmf_identify 00:22:25.878 ************************************ 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:25.878 * Looking for test storage... 00:22:25.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.878 
14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.878 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:25.879 Cannot find device "nvmf_tgt_br" 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:22:25.879 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.137 
Cannot find device "nvmf_tgt_br2" 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:26.137 Cannot find device "nvmf_tgt_br" 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:26.137 Cannot find device "nvmf_tgt_br2" 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:22:26.137 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.138 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip 
link set nvmf_br up 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:26.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:22:26.397 00:22:26.397 --- 10.0.0.2 ping statistics --- 00:22:26.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.397 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:26.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:26.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:22:26.397 00:22:26.397 --- 10.0.0.3 ping statistics --- 00:22:26.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.397 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:26.397 00:22:26.397 --- 10.0.0.1 ping statistics --- 00:22:26.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.397 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73491 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 
73491 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 73491 ']' 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.397 14:00:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.397 [2024-05-15 14:00:24.893081] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:26.397 [2024-05-15 14:00:24.893158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.655 [2024-05-15 14:00:25.034118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.655 [2024-05-15 14:00:25.135615] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.655 [2024-05-15 14:00:25.135662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.655 [2024-05-15 14:00:25.135672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.655 [2024-05-15 14:00:25.135681] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.655 [2024-05-15 14:00:25.135688] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
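The nvmf_veth_init and target-startup sequence traced above boils down to the sketch below. It is condensed from the ip, iptables and nvmf_tgt invocations logged by nvmf/common.sh and host/identify.sh in this run; the paths, core mask and 10.0.0.x addresses are taken verbatim from the trace, and waitforlisten is the autotest helper that polls the target's RPC socket. Treat it as an illustration of what the trace performs, not a drop-in script.

    # one initiator-side interface plus a namespaced target with two interfaces, all joined by a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # start the target inside the namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the target answers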
00:22:26.655 [2024-05-15 14:00:25.136170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.655 [2024-05-15 14:00:25.136268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.655 [2024-05-15 14:00:25.136531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.655 [2024-05-15 14:00:25.136533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.221 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:27.221 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:22:27.221 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.222 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.222 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.222 [2024-05-15 14:00:25.724702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.222 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.222 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:27.222 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.222 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 Malloc0 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 [2024-05-15 14:00:25.851237] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:27.480 [2024-05-15 14:00:25.851461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 [ 00:22:27.480 { 00:22:27.480 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:27.480 "subtype": "Discovery", 00:22:27.480 "listen_addresses": [ 00:22:27.480 { 00:22:27.480 "trtype": "TCP", 00:22:27.480 "adrfam": "IPv4", 00:22:27.480 "traddr": "10.0.0.2", 00:22:27.480 "trsvcid": "4420" 00:22:27.480 } 00:22:27.480 ], 00:22:27.480 "allow_any_host": true, 00:22:27.480 "hosts": [] 00:22:27.480 }, 00:22:27.480 { 00:22:27.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.480 "subtype": "NVMe", 00:22:27.480 "listen_addresses": [ 00:22:27.480 { 00:22:27.480 "trtype": "TCP", 00:22:27.480 "adrfam": "IPv4", 00:22:27.480 "traddr": "10.0.0.2", 00:22:27.480 "trsvcid": "4420" 00:22:27.480 } 00:22:27.480 ], 00:22:27.480 "allow_any_host": true, 00:22:27.480 "hosts": [], 00:22:27.480 "serial_number": "SPDK00000000000001", 00:22:27.480 "model_number": "SPDK bdev Controller", 00:22:27.480 "max_namespaces": 32, 00:22:27.480 "min_cntlid": 1, 00:22:27.480 "max_cntlid": 65519, 00:22:27.480 "namespaces": [ 00:22:27.480 { 00:22:27.480 "nsid": 1, 00:22:27.480 "bdev_name": "Malloc0", 00:22:27.480 "name": "Malloc0", 00:22:27.480 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:27.480 "eui64": "ABCDEF0123456789", 00:22:27.480 "uuid": "012deaaf-e857-4343-b159-4adc79b383fc" 00:22:27.480 } 00:22:27.480 ] 00:22:27.480 } 00:22:27.480 ] 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.480 14:00:25 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:27.480 [2024-05-15 14:00:25.929914] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
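Once the target answers on its RPC socket, the configuration traced above reduces to the host/identify.sh steps sketched below (condensed from the rpc_cmd calls in this run; rpc_cmd is the autotest wrapper that forwards each command to the target's JSON-RPC socket). Sizes and identifiers match the values logged above.

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as traced above
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
            --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_get_subsystems                            # sanity check: discovery + cnode1 both listed
    # the identify tool is then pointed at the discovery subsystem (its output follows below)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all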
00:22:27.480 [2024-05-15 14:00:25.929958] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73527 ] 00:22:27.745 [2024-05-15 14:00:26.066260] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:27.745 [2024-05-15 14:00:26.066323] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:27.745 [2024-05-15 14:00:26.066329] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:27.745 [2024-05-15 14:00:26.066345] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:27.745 [2024-05-15 14:00:26.066357] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:22:27.745 [2024-05-15 14:00:26.066480] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:27.745 [2024-05-15 14:00:26.066520] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1efa280 0 00:22:27.745 [2024-05-15 14:00:26.073749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:27.745 [2024-05-15 14:00:26.073767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:27.745 [2024-05-15 14:00:26.073772] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:27.745 [2024-05-15 14:00:26.073776] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:27.745 [2024-05-15 14:00:26.073820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.745 [2024-05-15 14:00:26.073825] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.745 [2024-05-15 14:00:26.073829] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.745 [2024-05-15 14:00:26.073841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:27.745 [2024-05-15 14:00:26.073866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.745 [2024-05-15 14:00:26.081749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.745 [2024-05-15 14:00:26.081763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.745 [2024-05-15 14:00:26.081767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.745 [2024-05-15 14:00:26.081772] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.745 [2024-05-15 14:00:26.081782] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:27.745 [2024-05-15 14:00:26.081789] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:27.746 [2024-05-15 14:00:26.081795] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:27.746 [2024-05-15 14:00:26.081809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.081813] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 
14:00:26.081817] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.081824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.746 [2024-05-15 14:00:26.081842] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.081889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.081895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.081899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.081903] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.746 [2024-05-15 14:00:26.081909] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:27.746 [2024-05-15 14:00:26.081916] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:27.746 [2024-05-15 14:00:26.081923] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.081926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.081930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.081936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.746 [2024-05-15 14:00:26.081949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.081987] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.081992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.081996] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.081999] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.746 [2024-05-15 14:00:26.082006] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:27.746 [2024-05-15 14:00:26.082013] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:27.746 [2024-05-15 14:00:26.082019] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082023] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082027] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.082033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.746 [2024-05-15 14:00:26.082045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.082080] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.082086] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.082090] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082095] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.746 [2024-05-15 14:00:26.082101] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:27.746 [2024-05-15 14:00:26.082109] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082113] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082117] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.082123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.746 [2024-05-15 14:00:26.082135] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.082170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.082176] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.082180] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082183] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.746 [2024-05-15 14:00:26.082189] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:27.746 [2024-05-15 14:00:26.082194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:27.746 [2024-05-15 14:00:26.082201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:27.746 [2024-05-15 14:00:26.082306] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:27.746 [2024-05-15 14:00:26.082311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:27.746 [2024-05-15 14:00:26.082319] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082322] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082326] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.082332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.746 [2024-05-15 14:00:26.082344] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.082382] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.082388] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.082392] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:27.746 [2024-05-15 14:00:26.082395] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.746 [2024-05-15 14:00:26.082401] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:27.746 [2024-05-15 14:00:26.082409] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082413] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082417] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.082423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.746 [2024-05-15 14:00:26.082434] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.082470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.082476] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.082479] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082483] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.746 [2024-05-15 14:00:26.082488] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:27.746 [2024-05-15 14:00:26.082493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:27.746 [2024-05-15 14:00:26.082500] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:27.746 [2024-05-15 14:00:26.082515] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:27.746 [2024-05-15 14:00:26.082524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082528] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.082534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.746 [2024-05-15 14:00:26.082547] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.082614] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.746 [2024-05-15 14:00:26.082620] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.746 [2024-05-15 14:00:26.082624] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082628] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1efa280): datao=0, datal=4096, cccid=0 00:22:27.746 [2024-05-15 14:00:26.082632] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f42950) on tqpair(0x1efa280): expected_datao=0, payload_size=4096 00:22:27.746 [2024-05-15 14:00:26.082637] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:27.746 [2024-05-15 14:00:26.082644] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082648] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.082662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.082666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.746 [2024-05-15 14:00:26.082678] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:27.746 [2024-05-15 14:00:26.082684] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:27.746 [2024-05-15 14:00:26.082688] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:27.746 [2024-05-15 14:00:26.082694] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:27.746 [2024-05-15 14:00:26.082698] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:27.746 [2024-05-15 14:00:26.082703] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:27.746 [2024-05-15 14:00:26.082711] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:27.746 [2024-05-15 14:00:26.082720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082724] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.746 [2024-05-15 14:00:26.082728] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.746 [2024-05-15 14:00:26.082744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.746 [2024-05-15 14:00:26.082758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.746 [2024-05-15 14:00:26.082799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.746 [2024-05-15 14:00:26.082805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.746 [2024-05-15 14:00:26.082808] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42950) on tqpair=0x1efa280 00:22:27.747 [2024-05-15 14:00:26.082820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.082833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.747 [2024-05-15 14:00:26.082839] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082846] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.082852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.747 [2024-05-15 14:00:26.082858] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082865] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.082871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.747 [2024-05-15 14:00:26.082877] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082880] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082884] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.082890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.747 [2024-05-15 14:00:26.082895] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:27.747 [2024-05-15 14:00:26.082906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:27.747 [2024-05-15 14:00:26.082912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.082916] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.082922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.747 [2024-05-15 14:00:26.082936] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42950, cid 0, qid 0 00:22:27.747 [2024-05-15 14:00:26.082941] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42ab0, cid 1, qid 0 00:22:27.747 [2024-05-15 14:00:26.082945] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42c10, cid 2, qid 0 00:22:27.747 [2024-05-15 14:00:26.082950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.747 [2024-05-15 14:00:26.082954] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42ed0, cid 4, qid 0 00:22:27.747 [2024-05-15 14:00:26.083026] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.747 [2024-05-15 14:00:26.083032] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.747 [2024-05-15 14:00:26.083036] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083039] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42ed0) on tqpair=0x1efa280 00:22:27.747 [2024-05-15 14:00:26.083045] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:27.747 [2024-05-15 14:00:26.083051] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:27.747 [2024-05-15 14:00:26.083060] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.083070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.747 [2024-05-15 14:00:26.083082] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42ed0, cid 4, qid 0 00:22:27.747 [2024-05-15 14:00:26.083124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.747 [2024-05-15 14:00:26.083130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.747 [2024-05-15 14:00:26.083133] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083137] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1efa280): datao=0, datal=4096, cccid=4 00:22:27.747 [2024-05-15 14:00:26.083142] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f42ed0) on tqpair(0x1efa280): expected_datao=0, payload_size=4096 00:22:27.747 [2024-05-15 14:00:26.083146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083152] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083156] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.747 [2024-05-15 14:00:26.083168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.747 [2024-05-15 14:00:26.083172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083176] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42ed0) on tqpair=0x1efa280 00:22:27.747 [2024-05-15 14:00:26.083187] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:27.747 [2024-05-15 14:00:26.083212] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083216] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.083222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.747 [2024-05-15 14:00:26.083229] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083233] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.083242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.747 [2024-05-15 14:00:26.083259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1f42ed0, cid 4, qid 0 00:22:27.747 [2024-05-15 14:00:26.083264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f43030, cid 5, qid 0 00:22:27.747 [2024-05-15 14:00:26.083344] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.747 [2024-05-15 14:00:26.083350] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.747 [2024-05-15 14:00:26.083353] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083357] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1efa280): datao=0, datal=1024, cccid=4 00:22:27.747 [2024-05-15 14:00:26.083361] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f42ed0) on tqpair(0x1efa280): expected_datao=0, payload_size=1024 00:22:27.747 [2024-05-15 14:00:26.083366] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083372] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083375] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083381] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.747 [2024-05-15 14:00:26.083386] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.747 [2024-05-15 14:00:26.083389] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083393] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f43030) on tqpair=0x1efa280 00:22:27.747 [2024-05-15 14:00:26.083407] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.747 [2024-05-15 14:00:26.083412] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.747 [2024-05-15 14:00:26.083416] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083420] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42ed0) on tqpair=0x1efa280 00:22:27.747 [2024-05-15 14:00:26.083435] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.083445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.747 [2024-05-15 14:00:26.083461] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42ed0, cid 4, qid 0 00:22:27.747 [2024-05-15 14:00:26.083506] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.747 [2024-05-15 14:00:26.083511] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.747 [2024-05-15 14:00:26.083515] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083519] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1efa280): datao=0, datal=3072, cccid=4 00:22:27.747 [2024-05-15 14:00:26.083523] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f42ed0) on tqpair(0x1efa280): expected_datao=0, payload_size=3072 00:22:27.747 [2024-05-15 14:00:26.083528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083534] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083537] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.747 [2024-05-15 14:00:26.083550] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.747 [2024-05-15 14:00:26.083554] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42ed0) on tqpair=0x1efa280 00:22:27.747 [2024-05-15 14:00:26.083566] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083570] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1efa280) 00:22:27.747 [2024-05-15 14:00:26.083575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.747 [2024-05-15 14:00:26.083591] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42ed0, cid 4, qid 0 00:22:27.747 [2024-05-15 14:00:26.083636] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.747 [2024-05-15 14:00:26.083642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.747 [2024-05-15 14:00:26.083645] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083649] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1efa280): datao=0, datal=8, cccid=4 00:22:27.747 [2024-05-15 14:00:26.083653] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f42ed0) on tqpair(0x1efa280): expected_datao=0, payload_size=8 00:22:27.747 [2024-05-15 14:00:26.083658] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083664] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083668] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.747 [2024-05-15 14:00:26.083679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.747 [2024-05-15 14:00:26.083685] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.747 [2024-05-15 14:00:26.083688] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.748 [2024-05-15 14:00:26.083692] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42ed0) on tqpair=0x1efa280 00:22:27.748 ===================================================== 00:22:27.748 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:27.748 ===================================================== 00:22:27.748 Controller Capabilities/Features 00:22:27.748 ================================ 00:22:27.748 Vendor ID: 0000 00:22:27.748 Subsystem Vendor ID: 0000 00:22:27.748 Serial Number: .................... 00:22:27.748 Model Number: ........................................ 
00:22:27.748 Firmware Version: 24.05 00:22:27.748 Recommended Arb Burst: 0 00:22:27.748 IEEE OUI Identifier: 00 00 00 00:22:27.748 Multi-path I/O 00:22:27.748 May have multiple subsystem ports: No 00:22:27.748 May have multiple controllers: No 00:22:27.748 Associated with SR-IOV VF: No 00:22:27.748 Max Data Transfer Size: 131072 00:22:27.748 Max Number of Namespaces: 0 00:22:27.748 Max Number of I/O Queues: 1024 00:22:27.748 NVMe Specification Version (VS): 1.3 00:22:27.748 NVMe Specification Version (Identify): 1.3 00:22:27.748 Maximum Queue Entries: 128 00:22:27.748 Contiguous Queues Required: Yes 00:22:27.748 Arbitration Mechanisms Supported 00:22:27.748 Weighted Round Robin: Not Supported 00:22:27.748 Vendor Specific: Not Supported 00:22:27.748 Reset Timeout: 15000 ms 00:22:27.748 Doorbell Stride: 4 bytes 00:22:27.748 NVM Subsystem Reset: Not Supported 00:22:27.748 Command Sets Supported 00:22:27.748 NVM Command Set: Supported 00:22:27.748 Boot Partition: Not Supported 00:22:27.748 Memory Page Size Minimum: 4096 bytes 00:22:27.748 Memory Page Size Maximum: 4096 bytes 00:22:27.748 Persistent Memory Region: Not Supported 00:22:27.748 Optional Asynchronous Events Supported 00:22:27.748 Namespace Attribute Notices: Not Supported 00:22:27.748 Firmware Activation Notices: Not Supported 00:22:27.748 ANA Change Notices: Not Supported 00:22:27.748 PLE Aggregate Log Change Notices: Not Supported 00:22:27.748 LBA Status Info Alert Notices: Not Supported 00:22:27.748 EGE Aggregate Log Change Notices: Not Supported 00:22:27.748 Normal NVM Subsystem Shutdown event: Not Supported 00:22:27.748 Zone Descriptor Change Notices: Not Supported 00:22:27.748 Discovery Log Change Notices: Supported 00:22:27.748 Controller Attributes 00:22:27.748 128-bit Host Identifier: Not Supported 00:22:27.748 Non-Operational Permissive Mode: Not Supported 00:22:27.748 NVM Sets: Not Supported 00:22:27.748 Read Recovery Levels: Not Supported 00:22:27.748 Endurance Groups: Not Supported 00:22:27.748 Predictable Latency Mode: Not Supported 00:22:27.748 Traffic Based Keep ALive: Not Supported 00:22:27.748 Namespace Granularity: Not Supported 00:22:27.748 SQ Associations: Not Supported 00:22:27.748 UUID List: Not Supported 00:22:27.748 Multi-Domain Subsystem: Not Supported 00:22:27.748 Fixed Capacity Management: Not Supported 00:22:27.748 Variable Capacity Management: Not Supported 00:22:27.748 Delete Endurance Group: Not Supported 00:22:27.748 Delete NVM Set: Not Supported 00:22:27.748 Extended LBA Formats Supported: Not Supported 00:22:27.748 Flexible Data Placement Supported: Not Supported 00:22:27.748 00:22:27.748 Controller Memory Buffer Support 00:22:27.748 ================================ 00:22:27.748 Supported: No 00:22:27.748 00:22:27.748 Persistent Memory Region Support 00:22:27.748 ================================ 00:22:27.748 Supported: No 00:22:27.748 00:22:27.748 Admin Command Set Attributes 00:22:27.748 ============================ 00:22:27.748 Security Send/Receive: Not Supported 00:22:27.748 Format NVM: Not Supported 00:22:27.748 Firmware Activate/Download: Not Supported 00:22:27.748 Namespace Management: Not Supported 00:22:27.748 Device Self-Test: Not Supported 00:22:27.748 Directives: Not Supported 00:22:27.748 NVMe-MI: Not Supported 00:22:27.748 Virtualization Management: Not Supported 00:22:27.748 Doorbell Buffer Config: Not Supported 00:22:27.748 Get LBA Status Capability: Not Supported 00:22:27.748 Command & Feature Lockdown Capability: Not Supported 00:22:27.748 Abort Command Limit: 1 00:22:27.748 Async 
Event Request Limit: 4 00:22:27.748 Number of Firmware Slots: N/A 00:22:27.748 Firmware Slot 1 Read-Only: N/A 00:22:27.748 Firmware Activation Without Reset: N/A 00:22:27.748 Multiple Update Detection Support: N/A 00:22:27.748 Firmware Update Granularity: No Information Provided 00:22:27.748 Per-Namespace SMART Log: No 00:22:27.748 Asymmetric Namespace Access Log Page: Not Supported 00:22:27.748 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:27.748 Command Effects Log Page: Not Supported 00:22:27.748 Get Log Page Extended Data: Supported 00:22:27.748 Telemetry Log Pages: Not Supported 00:22:27.748 Persistent Event Log Pages: Not Supported 00:22:27.748 Supported Log Pages Log Page: May Support 00:22:27.748 Commands Supported & Effects Log Page: Not Supported 00:22:27.748 Feature Identifiers & Effects Log Page:May Support 00:22:27.748 NVMe-MI Commands & Effects Log Page: May Support 00:22:27.748 Data Area 4 for Telemetry Log: Not Supported 00:22:27.748 Error Log Page Entries Supported: 128 00:22:27.748 Keep Alive: Not Supported 00:22:27.748 00:22:27.748 NVM Command Set Attributes 00:22:27.748 ========================== 00:22:27.748 Submission Queue Entry Size 00:22:27.748 Max: 1 00:22:27.748 Min: 1 00:22:27.748 Completion Queue Entry Size 00:22:27.748 Max: 1 00:22:27.748 Min: 1 00:22:27.748 Number of Namespaces: 0 00:22:27.748 Compare Command: Not Supported 00:22:27.748 Write Uncorrectable Command: Not Supported 00:22:27.748 Dataset Management Command: Not Supported 00:22:27.748 Write Zeroes Command: Not Supported 00:22:27.748 Set Features Save Field: Not Supported 00:22:27.748 Reservations: Not Supported 00:22:27.748 Timestamp: Not Supported 00:22:27.748 Copy: Not Supported 00:22:27.748 Volatile Write Cache: Not Present 00:22:27.748 Atomic Write Unit (Normal): 1 00:22:27.748 Atomic Write Unit (PFail): 1 00:22:27.748 Atomic Compare & Write Unit: 1 00:22:27.748 Fused Compare & Write: Supported 00:22:27.748 Scatter-Gather List 00:22:27.748 SGL Command Set: Supported 00:22:27.748 SGL Keyed: Supported 00:22:27.748 SGL Bit Bucket Descriptor: Not Supported 00:22:27.748 SGL Metadata Pointer: Not Supported 00:22:27.748 Oversized SGL: Not Supported 00:22:27.748 SGL Metadata Address: Not Supported 00:22:27.748 SGL Offset: Supported 00:22:27.748 Transport SGL Data Block: Not Supported 00:22:27.748 Replay Protected Memory Block: Not Supported 00:22:27.748 00:22:27.748 Firmware Slot Information 00:22:27.748 ========================= 00:22:27.748 Active slot: 0 00:22:27.748 00:22:27.748 00:22:27.748 Error Log 00:22:27.748 ========= 00:22:27.748 00:22:27.748 Active Namespaces 00:22:27.748 ================= 00:22:27.748 Discovery Log Page 00:22:27.748 ================== 00:22:27.748 Generation Counter: 2 00:22:27.748 Number of Records: 2 00:22:27.748 Record Format: 0 00:22:27.748 00:22:27.748 Discovery Log Entry 0 00:22:27.748 ---------------------- 00:22:27.748 Transport Type: 3 (TCP) 00:22:27.748 Address Family: 1 (IPv4) 00:22:27.748 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:27.748 Entry Flags: 00:22:27.748 Duplicate Returned Information: 1 00:22:27.748 Explicit Persistent Connection Support for Discovery: 1 00:22:27.748 Transport Requirements: 00:22:27.748 Secure Channel: Not Required 00:22:27.748 Port ID: 0 (0x0000) 00:22:27.748 Controller ID: 65535 (0xffff) 00:22:27.748 Admin Max SQ Size: 128 00:22:27.748 Transport Service Identifier: 4420 00:22:27.748 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:27.748 Transport Address: 10.0.0.2 00:22:27.748 
Discovery Log Entry 1 00:22:27.748 ---------------------- 00:22:27.748 Transport Type: 3 (TCP) 00:22:27.748 Address Family: 1 (IPv4) 00:22:27.748 Subsystem Type: 2 (NVM Subsystem) 00:22:27.748 Entry Flags: 00:22:27.748 Duplicate Returned Information: 0 00:22:27.748 Explicit Persistent Connection Support for Discovery: 0 00:22:27.748 Transport Requirements: 00:22:27.748 Secure Channel: Not Required 00:22:27.748 Port ID: 0 (0x0000) 00:22:27.748 Controller ID: 65535 (0xffff) 00:22:27.748 Admin Max SQ Size: 128 00:22:27.748 Transport Service Identifier: 4420 00:22:27.748 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:27.748 Transport Address: 10.0.0.2 [2024-05-15 14:00:26.083785] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:27.748 [2024-05-15 14:00:26.083797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.748 [2024-05-15 14:00:26.083804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.748 [2024-05-15 14:00:26.083810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.748 [2024-05-15 14:00:26.083816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.748 [2024-05-15 14:00:26.083824] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.083828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.083832] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.083838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.083852] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.083890] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.083896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.083899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.083903] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.083911] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.083914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.083918] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.083924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.083939] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.083988] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.083993] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.083997] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084001] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084006] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:27.749 [2024-05-15 14:00:26.084011] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:27.749 [2024-05-15 14:00:26.084019] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084023] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084027] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084081] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084087] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084091] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084094] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084104] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084108] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084129] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084176] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084185] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084193] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084247] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 
14:00:26.084252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084260] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084273] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084276] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084333] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084339] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084355] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084360] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084364] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084382] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084424] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084428] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084441] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084445] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084501] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
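The block above repeats because the host driver is polling controller status over the admin queue while the discovery controller shuts down: each iteration is one FABRIC PROPERTY GET of CSTS issued from nvme_ctrlr_shutdown_poll_async(), whose completion is logged a few entries further down ("shutdown complete in 5 milliseconds"). Purely as an illustration of that internal loop, and not code from this test, the sketch below polls CSTS.SHST through the public SPDK host API; it assumes spdk_nvme_ctrlr_get_regs_csts() performs that same CSTS read over the admin queue for a fabrics controller and that shutdown (CC.SHN) has already been initiated.

/*
 * Illustrative sketch only (not part of this test run): poll CSTS.SHST
 * until the controller reports shutdown complete, mirroring what the
 * repeated FABRIC PROPERTY GET entries above are doing internally.
 * Assumes the public SPDK host API and an already connected `ctrlr`
 * on which shutdown has been initiated.
 */
#include <stdio.h>
#include <unistd.h>
#include "spdk/nvme.h"

static void
wait_for_shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_csts_register csts;

    /* Re-read CSTS until SHST reports shutdown processing complete. */
    do {
        csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
        usleep(1000);
    } while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);

    printf("shutdown complete, CSTS.SHST=%u\n", csts.bits.shst);
}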
00:22:27.749 [2024-05-15 14:00:26.084514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084523] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084531] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084549] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084597] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084601] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084610] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084617] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084635] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084677] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084681] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084694] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084698] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084719] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084766] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084772] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084775] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084779] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084789] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084793] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084796] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.749 [2024-05-15 14:00:26.084802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.749 [2024-05-15 14:00:26.084815] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.749 [2024-05-15 14:00:26.084859] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.749 [2024-05-15 14:00:26.084864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.749 [2024-05-15 14:00:26.084868] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.749 [2024-05-15 14:00:26.084872] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.749 [2024-05-15 14:00:26.084881] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.084885] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.084888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.084894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.084906] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.084945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.084950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.084954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.084958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.084967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.084971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.084975] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.084981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.084992] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085033] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085039] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085042] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085046] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085055] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085060] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 
14:00:26.085063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085081] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085119] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085126] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085129] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085133] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085142] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085146] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085168] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085204] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085233] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085251] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085285] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085290] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085294] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085297] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085307] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085311] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085314] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085320] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085332] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085378] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085381] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085391] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085395] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085398] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085455] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085460] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085464] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085468] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085492] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085510] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085546] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085552] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085559] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085569] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085573] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085594] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085643] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.750 [2024-05-15 14:00:26.085653] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.750 [2024-05-15 14:00:26.085660] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.750 [2024-05-15 14:00:26.085666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.750 [2024-05-15 14:00:26.085678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.750 [2024-05-15 14:00:26.085712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.750 [2024-05-15 14:00:26.085717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.750 [2024-05-15 14:00:26.085721] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.085724] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.751 [2024-05-15 14:00:26.089743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.089757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.089761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1efa280) 00:22:27.751 [2024-05-15 14:00:26.089768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.751 [2024-05-15 14:00:26.089787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f42d70, cid 3, qid 0 00:22:27.751 [2024-05-15 14:00:26.089823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.089829] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 14:00:26.089832] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.089836] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f42d70) on tqpair=0x1efa280 00:22:27.751 [2024-05-15 14:00:26.089845] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:27.751 00:22:27.751 14:00:26 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:27.751 [2024-05-15 14:00:26.134015] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
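At this point host/identify.sh launches spdk_nvme_identify against the TCP target at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1) with all debug flags enabled; the trace that follows is the controller initialization state machine (connect adminq, read VS and CAP, enable via CC.EN, wait for CSTS.RDY = 1, then Identify). The sketch below is not the identify tool's source; it is a minimal, hypothetical equivalent of that sequence using the public SPDK host API (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data), with the transport ID string taken from the command line above.

/*
 * Minimal sketch, not the source of spdk_nvme_identify: connect to the
 * same TCP target and read the Identify Controller data that is dumped
 * further below. Assumes the public SPDK host API and default
 * controller options; error handling is reduced to early returns.
 */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    /* Initialize the SPDK environment (DPDK EAL, as in the log above). */
    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Same transport ID string as on the command line above. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the connect / enable / identify sequence traced below. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* CNTLID and raw MDTS as reported in the Identify trace below. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("CNTLID 0x%04x, MDTS %u, model '%.40s'\n",
           cdata->cntlid, cdata->mdts, cdata->mn);

    spdk_nvme_detach(ctrlr);
    return 0;
}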
00:22:27.751 [2024-05-15 14:00:26.134059] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73529 ] 00:22:27.751 [2024-05-15 14:00:26.269283] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:27.751 [2024-05-15 14:00:26.269347] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:27.751 [2024-05-15 14:00:26.269353] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:27.751 [2024-05-15 14:00:26.269368] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:27.751 [2024-05-15 14:00:26.269380] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:22:27.751 [2024-05-15 14:00:26.269513] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:27.751 [2024-05-15 14:00:26.269553] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c8f280 0 00:22:27.751 [2024-05-15 14:00:26.276755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:27.751 [2024-05-15 14:00:26.276771] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:27.751 [2024-05-15 14:00:26.276776] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:27.751 [2024-05-15 14:00:26.276781] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:27.751 [2024-05-15 14:00:26.276827] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.276832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.276836] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.751 [2024-05-15 14:00:26.276848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:27.751 [2024-05-15 14:00:26.276870] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.751 [2024-05-15 14:00:26.284752] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.284775] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 14:00:26.284780] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.284784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.751 [2024-05-15 14:00:26.284801] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:27.751 [2024-05-15 14:00:26.284810] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:27.751 [2024-05-15 14:00:26.284816] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:27.751 [2024-05-15 14:00:26.284832] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.284837] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.284841] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.751 [2024-05-15 14:00:26.284851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.751 [2024-05-15 14:00:26.284877] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.751 [2024-05-15 14:00:26.284932] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.284938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 14:00:26.284941] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.284945] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.751 [2024-05-15 14:00:26.284951] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:27.751 [2024-05-15 14:00:26.284958] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:27.751 [2024-05-15 14:00:26.284965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.284969] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.284972] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.751 [2024-05-15 14:00:26.284979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.751 [2024-05-15 14:00:26.284992] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.751 [2024-05-15 14:00:26.285028] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.285034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 14:00:26.285038] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285042] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.751 [2024-05-15 14:00:26.285048] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:27.751 [2024-05-15 14:00:26.285056] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:27.751 [2024-05-15 14:00:26.285062] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.751 [2024-05-15 14:00:26.285076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.751 [2024-05-15 14:00:26.285088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.751 [2024-05-15 14:00:26.285125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.285131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 
14:00:26.285135] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.751 [2024-05-15 14:00:26.285144] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:27.751 [2024-05-15 14:00:26.285153] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285157] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285160] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.751 [2024-05-15 14:00:26.285166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.751 [2024-05-15 14:00:26.285179] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.751 [2024-05-15 14:00:26.285218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.285226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 14:00:26.285230] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285234] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.751 [2024-05-15 14:00:26.285240] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:27.751 [2024-05-15 14:00:26.285245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:27.751 [2024-05-15 14:00:26.285252] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:27.751 [2024-05-15 14:00:26.285357] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:27.751 [2024-05-15 14:00:26.285364] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:27.751 [2024-05-15 14:00:26.285372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285380] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.751 [2024-05-15 14:00:26.285386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.751 [2024-05-15 14:00:26.285399] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.751 [2024-05-15 14:00:26.285435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.285441] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 14:00:26.285445] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285449] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.751 
[2024-05-15 14:00:26.285454] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:27.751 [2024-05-15 14:00:26.285463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.751 [2024-05-15 14:00:26.285470] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.751 [2024-05-15 14:00:26.285476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.751 [2024-05-15 14:00:26.285497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.751 [2024-05-15 14:00:26.285533] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.751 [2024-05-15 14:00:26.285542] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.751 [2024-05-15 14:00:26.285545] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285549] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.752 [2024-05-15 14:00:26.285555] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:27.752 [2024-05-15 14:00:26.285560] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:27.752 [2024-05-15 14:00:26.285567] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:27.752 [2024-05-15 14:00:26.285584] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:27.752 [2024-05-15 14:00:26.285594] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.752 [2024-05-15 14:00:26.285604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.752 [2024-05-15 14:00:26.285617] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.752 [2024-05-15 14:00:26.285689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.752 [2024-05-15 14:00:26.285697] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.752 [2024-05-15 14:00:26.285701] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285705] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=4096, cccid=0 00:22:27.752 [2024-05-15 14:00:26.285710] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cd7950) on tqpair(0x1c8f280): expected_datao=0, payload_size=4096 00:22:27.752 [2024-05-15 14:00:26.285716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285723] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285727] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.752 [2024-05-15 14:00:26.285749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.752 [2024-05-15 14:00:26.285752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285756] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.752 [2024-05-15 14:00:26.285765] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:27.752 [2024-05-15 14:00:26.285771] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:27.752 [2024-05-15 14:00:26.285776] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:27.752 [2024-05-15 14:00:26.285780] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:27.752 [2024-05-15 14:00:26.285785] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:27.752 [2024-05-15 14:00:26.285790] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:27.752 [2024-05-15 14:00:26.285799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:27.752 [2024-05-15 14:00:26.285809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285813] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.752 [2024-05-15 14:00:26.285816] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.752 [2024-05-15 14:00:26.285823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.752 [2024-05-15 14:00:26.285837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.752 ===================================================== 00:22:27.752 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.752 ===================================================== 00:22:27.752 Controller Capabilities/Features 00:22:27.752 ================================ 00:22:27.752 Vendor ID: 8086 00:22:27.752 Subsystem Vendor ID: 8086 00:22:27.752 Serial Number: SPDK00000000000001 00:22:27.752 Model Number: SPDK bdev Controller 00:22:27.752 Firmware Version: 24.05 00:22:27.752 Recommended Arb Burst: 6 00:22:27.752 IEEE OUI Identifier: e4 d2 5c 00:22:27.752 Multi-path I/O 00:22:27.752 May have multiple subsystem ports: Yes 00:22:27.752 May have multiple controllers: Yes 00:22:27.752 Associated with SR-IOV VF: No 00:22:27.752 Max Data Transfer Size: 131072 00:22:27.752 Max Number of Namespaces: 32 00:22:27.752 Max Number of I/O Queues: 127 00:22:27.752 NVMe Specification Version (VS): 1.3 00:22:27.752 NVMe Specification Version (Identify): 1.3 00:22:27.752 Maximum Queue Entries: 128 00:22:27.752 Contiguous Queues Required: Yes 00:22:27.752 Arbitration Mechanisms Supported 00:22:27.752 Weighted Round Robin: Not Supported 00:22:27.752 Vendor Specific: Not Supported 00:22:27.752 Reset Timeout: 15000 ms 00:22:27.752 Doorbell Stride: 4 bytes 
00:22:27.752 NVM Subsystem Reset: Not Supported 00:22:27.752 Command Sets Supported 00:22:27.752 NVM Command Set: Supported 00:22:27.752 Boot Partition: Not Supported 00:22:27.752 Memory Page Size Minimum: 4096 bytes 00:22:27.752 Memory Page Size Maximum: 4096 bytes 00:22:27.752 Persistent Memory Region: Not Supported 00:22:27.752 Optional Asynchronous Events Supported 00:22:27.752 Namespace Attribute Notices: Supported 00:22:27.752 Firmware Activation Notices: Not Supported 00:22:27.752 ANA Change Notices: Not Supported 00:22:27.752 PLE Aggregate Log Change Notices: Not Supported 00:22:27.752 LBA Status Info Alert Notices: Not Supported 00:22:27.752 EGE Aggregate Log Change Notices: Not Supported 00:22:27.752 Normal NVM Subsystem Shutdown event: Not Supported 00:22:27.752 Zone Descriptor Change Notices: Not Supported 00:22:27.752 Discovery Log Change Notices: Not Supported 00:22:27.752 Controller Attributes 00:22:27.752 128-bit Host Identifier: Supported 00:22:27.752 Non-Operational Permissive Mode: Not Supported 00:22:27.752 NVM Sets: Not Supported 00:22:27.752 Read Recovery Levels: Not Supported 00:22:27.752 Endurance Groups: Not Supported 00:22:27.752 Predictable Latency Mode: Not Supported 00:22:27.752 Traffic Based Keep ALive: Not Supported 00:22:27.752 Namespace Granularity: Not Supported 00:22:27.752 SQ Associations: Not Supported 00:22:27.752 UUID List: Not Supported 00:22:27.752 Multi-Domain Subsystem: Not Supported 00:22:27.752 Fixed Capacity Management: Not Supported 00:22:27.752 Variable Capacity Management: Not Supported 00:22:27.752 Delete Endurance Group: Not Supported 00:22:27.752 Delete NVM Set: Not Supported 00:22:27.752 Extended LBA Formats Supported: Not Supported 00:22:27.752 Flexible Data Placement Supported: Not Supported 00:22:27.752 00:22:27.752 Controller Memory Buffer Support 00:22:27.752 ================================ 00:22:27.752 Supported: No 00:22:27.752 00:22:27.752 Persistent Memory Region Support 00:22:27.752 ================================ 00:22:27.752 Supported: No 00:22:27.752 00:22:27.752 Admin Command Set Attributes 00:22:27.752 ============================ 00:22:27.752 Security Send/Receive: Not Supported 00:22:27.752 Format NVM: Not Supported 00:22:27.752 Firmware Activate/Download: Not Supported 00:22:27.752 Namespace Management: Not Supported 00:22:27.752 Device Self-Test: Not Supported 00:22:27.752 Directives: Not Supported 00:22:27.752 NVMe-MI: Not Supported 00:22:27.752 Virtualization Management: Not Supported 00:22:27.752 Doorbell Buffer Config: Not Supported 00:22:27.752 Get LBA Status Capability: Not Supported 00:22:27.752 Command & Feature Lockdown Capability: Not Supported 00:22:27.752 Abort Command Limit: 4 00:22:27.752 Async Event Request Limit: 4 00:22:27.752 Number of Firmware Slots: N/A 00:22:27.752 Firmware Slot 1 Read-Only: N/A 00:22:27.752 Firmware Activation Without Reset: N/A 00:22:27.752 Multiple Update Detection Support: N/A 00:22:27.752 Firmware Update Granularity: No Information Provided 00:22:27.752 Per-Namespace SMART Log: No 00:22:27.752 Asymmetric Namespace Access Log Page: Not Supported 00:22:27.752 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:27.752 Command Effects Log Page: Supported 00:22:27.752 Get Log Page Extended Data: Supported 00:22:27.752 Telemetry Log Pages: Not Supported 00:22:27.752 Persistent Event Log Pages: Not Supported 00:22:27.752 Supported Log Pages Log Page: May Support 00:22:27.752 Commands Supported & Effects Log Page: Not Supported 00:22:27.752 Feature Identifiers & Effects Log Page:May 
Support 00:22:27.752 NVMe-MI Commands & Effects Log Page: May Support 00:22:27.752 Data Area 4 for Telemetry Log: Not Supported 00:22:27.752 Error Log Page Entries Supported: 128 00:22:27.752 Keep Alive: Supported 00:22:27.752 Keep Alive Granularity: 10000 ms 00:22:27.752 00:22:27.752 NVM Command Set Attributes 00:22:27.752 ========================== 00:22:27.752 Submission Queue Entry Size 00:22:27.752 Max: 64 00:22:27.752 Min: 64 00:22:27.752 Completion Queue Entry Size 00:22:27.752 Max: 16 00:22:27.752 Min: 16 00:22:27.752 Number of Namespaces: 32 00:22:27.752 Compare Command: Supported 00:22:27.752 Write Uncorrectable Command: Not Supported 00:22:27.752 Dataset Management Command: Supported 00:22:27.752 Write Zeroes Command: Supported 00:22:27.752 Set Features Save Field: Not Supported 00:22:27.752 Reservations: Supported 00:22:27.752 Timestamp: Not Supported 00:22:27.752 Copy: Supported 00:22:27.752 Volatile Write Cache: Present 00:22:27.752 Atomic Write Unit (Normal): 1 00:22:27.752 Atomic Write Unit (PFail): 1 00:22:27.752 Atomic Compare & Write Unit: 1 00:22:27.752 Fused Compare & Write: Supported 00:22:27.752 Scatter-Gather List 00:22:27.752 SGL Command Set: Supported 00:22:27.752 SGL Keyed: Supported 00:22:27.752 SGL Bit Bucket Descriptor: Not Supported 00:22:27.752 SGL Metadata Pointer: Not Supported 00:22:27.752 Oversized SGL: Not Supported 00:22:27.753 SGL Metadata Address: Not Supported 00:22:27.753 SGL Offset: Supported 00:22:27.753 Transport SGL Data Block: Not Supported 00:22:27.753 Replay Protected Memory Block: Not Supported 00:22:27.753 00:22:27.753 Firmware Slot Information 00:22:27.753 ========================= 00:22:27.753 Active slot: 1 00:22:27.753 Slot 1 Firmware Revision: 24.05 00:22:27.753 00:22:27.753 00:22:27.753 Commands Supported and Effects 00:22:27.753 ============================== 00:22:27.753 Admin Commands 00:22:27.753 -------------- 00:22:27.753 Get Log Page (02h): Supported 00:22:27.753 Identify (06h): Supported 00:22:27.753 Abort (08h): Supported 00:22:27.753 Set Features (09h): Supported 00:22:27.753 Get Features (0Ah): Supported 00:22:27.753 Asynchronous Event Request (0Ch): Supported 00:22:27.753 Keep Alive (18h): Supported 00:22:27.753 I/O Commands 00:22:27.753 ------------ 00:22:27.753 Flush (00h): Supported LBA-Change 00:22:27.753 Write (01h): Supported LBA-Change 00:22:27.753 Read (02h): Supported 00:22:27.753 Compare (05h): Supported 00:22:27.753 Write Zeroes (08h): Supported LBA-Change 00:22:27.753 Dataset Management (09h): Supported LBA-Change 00:22:27.753 Copy (19h): Supported LBA-Change 00:22:27.753 Unknown (79h): Supported LBA-Change 00:22:27.753 Unknown (7Ah): Supported 00:22:27.753 00:22:27.753 Error Log 00:22:27.753 ========= 00:22:27.753 00:22:27.753 Arbitration 00:22:27.753 =========== 00:22:27.753 Arbitration Burst: 1 00:22:27.753 00:22:27.753 Power Management 00:22:27.753 ================ 00:22:27.753 Number of Power States: 1 00:22:27.753 Current Power State: Power State #0 00:22:27.753 Power State #0: 00:22:27.753 Max Power: 0.00 W 00:22:27.753 Non-Operational State: Operational 00:22:27.753 Entry Latency: Not Reported 00:22:27.753 Exit Latency: Not Reported 00:22:27.753 Relative Read Throughput: 0 00:22:27.753 Relative Read Latency: 0 00:22:27.753 Relative Write Throughput: 0 00:22:27.753 Relative Write Latency: 0 00:22:27.753 Idle Power: Not Reported 00:22:27.753 Active Power: Not Reported 00:22:27.753 Non-Operational Permissive Mode: Not Supported 00:22:27.753 00:22:27.753 Health Information 00:22:27.753 ================== 
00:22:27.753 Critical Warnings: 00:22:27.753 Available Spare Space: OK 00:22:27.753 Temperature: OK 00:22:27.753 Device Reliability: OK 00:22:27.753 Read Only: No 00:22:27.753 Volatile Memory Backup: OK 00:22:27.753 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:27.753 Temperature Threshold: [2024-05-15 14:00:26.285881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.753 [2024-05-15 14:00:26.285890] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.753 [2024-05-15 14:00:26.285893] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285897] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7950) on tqpair=0x1c8f280 00:22:27.753 [2024-05-15 14:00:26.285905] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285909] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285913] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.285918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.753 [2024-05-15 14:00:26.285925] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285929] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285932] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.285938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.753 [2024-05-15 14:00:26.285944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.285957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.753 [2024-05-15 14:00:26.285963] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.285971] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.285976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.753 [2024-05-15 14:00:26.285981] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.285992] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.285998] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286002] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.286008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.753 [2024-05-15 14:00:26.286023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7950, cid 0, qid 0 00:22:27.753 [2024-05-15 14:00:26.286028] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ab0, cid 1, qid 0 00:22:27.753 [2024-05-15 14:00:26.286033] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7c10, cid 2, qid 0 00:22:27.753 [2024-05-15 14:00:26.286037] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.753 [2024-05-15 14:00:26.286042] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ed0, cid 4, qid 0 00:22:27.753 [2024-05-15 14:00:26.286112] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.753 [2024-05-15 14:00:26.286117] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.753 [2024-05-15 14:00:26.286121] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7ed0) on tqpair=0x1c8f280 00:22:27.753 [2024-05-15 14:00:26.286131] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:27.753 [2024-05-15 14:00:26.286137] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.286149] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.286156] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.286162] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286166] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286170] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.286176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.753 [2024-05-15 14:00:26.286188] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ed0, cid 4, qid 0 00:22:27.753 [2024-05-15 14:00:26.286225] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.753 [2024-05-15 14:00:26.286230] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.753 [2024-05-15 14:00:26.286234] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286237] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7ed0) on tqpair=0x1c8f280 00:22:27.753 [2024-05-15 14:00:26.286281] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.286290] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.286297] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:27.753 [2024-05-15 14:00:26.286301] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.286307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.753 [2024-05-15 14:00:26.286320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ed0, cid 4, qid 0 00:22:27.753 [2024-05-15 14:00:26.286362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.753 [2024-05-15 14:00:26.286367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.753 [2024-05-15 14:00:26.286371] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286375] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=4096, cccid=4 00:22:27.753 [2024-05-15 14:00:26.286380] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cd7ed0) on tqpair(0x1c8f280): expected_datao=0, payload_size=4096 00:22:27.753 [2024-05-15 14:00:26.286384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286391] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286395] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.753 [2024-05-15 14:00:26.286407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.753 [2024-05-15 14:00:26.286411] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286415] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7ed0) on tqpair=0x1c8f280 00:22:27.753 [2024-05-15 14:00:26.286428] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:27.753 [2024-05-15 14:00:26.286443] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.286452] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:27.753 [2024-05-15 14:00:26.286459] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.753 [2024-05-15 14:00:26.286463] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f280) 00:22:27.753 [2024-05-15 14:00:26.286469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.753 [2024-05-15 14:00:26.286482] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ed0, cid 4, qid 0 00:22:27.753 [2024-05-15 14:00:26.286538] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.753 [2024-05-15 14:00:26.286544] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.754 [2024-05-15 14:00:26.286547] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286551] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=4096, cccid=4 00:22:27.754 [2024-05-15 14:00:26.286556] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1cd7ed0) on tqpair(0x1c8f280): expected_datao=0, payload_size=4096 00:22:27.754 [2024-05-15 14:00:26.286561] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286567] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286570] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286578] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.754 [2024-05-15 14:00:26.286583] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.754 [2024-05-15 14:00:26.286587] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286591] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7ed0) on tqpair=0x1c8f280 00:22:27.754 [2024-05-15 14:00:26.286604] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286613] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286620] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286624] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 14:00:26.286630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.286643] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ed0, cid 4, qid 0 00:22:27.754 [2024-05-15 14:00:26.286692] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.754 [2024-05-15 14:00:26.286698] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.754 [2024-05-15 14:00:26.286702] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286705] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=4096, cccid=4 00:22:27.754 [2024-05-15 14:00:26.286710] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cd7ed0) on tqpair(0x1c8f280): expected_datao=0, payload_size=4096 00:22:27.754 [2024-05-15 14:00:26.286715] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286721] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286724] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.754 [2024-05-15 14:00:26.286748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.754 [2024-05-15 14:00:26.286752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286755] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7ed0) on tqpair=0x1c8f280 00:22:27.754 [2024-05-15 14:00:26.286765] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286773] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286782] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286789] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286794] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286800] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:27.754 [2024-05-15 14:00:26.286805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:27.754 [2024-05-15 14:00:26.286811] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:27.754 [2024-05-15 14:00:26.286832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286837] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 14:00:26.286843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.286849] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286853] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286857] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 14:00:26.286863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.754 [2024-05-15 14:00:26.286881] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ed0, cid 4, qid 0 00:22:27.754 [2024-05-15 14:00:26.286886] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd8030, cid 5, qid 0 00:22:27.754 [2024-05-15 14:00:26.286936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.754 [2024-05-15 14:00:26.286942] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.754 [2024-05-15 14:00:26.286946] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286950] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7ed0) on tqpair=0x1c8f280 00:22:27.754 [2024-05-15 14:00:26.286957] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.754 [2024-05-15 14:00:26.286962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.754 [2024-05-15 14:00:26.286966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd8030) on tqpair=0x1c8f280 00:22:27.754 [2024-05-15 14:00:26.286980] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.286984] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 
14:00:26.286990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.287002] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd8030, cid 5, qid 0 00:22:27.754 [2024-05-15 14:00:26.287034] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.754 [2024-05-15 14:00:26.287040] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.754 [2024-05-15 14:00:26.287044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287048] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd8030) on tqpair=0x1c8f280 00:22:27.754 [2024-05-15 14:00:26.287059] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 14:00:26.287069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.287081] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd8030, cid 5, qid 0 00:22:27.754 [2024-05-15 14:00:26.287117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.754 [2024-05-15 14:00:26.287123] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.754 [2024-05-15 14:00:26.287127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287131] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd8030) on tqpair=0x1c8f280 00:22:27.754 [2024-05-15 14:00:26.287140] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287144] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 14:00:26.287150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.287162] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd8030, cid 5, qid 0 00:22:27.754 [2024-05-15 14:00:26.287202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.754 [2024-05-15 14:00:26.287208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.754 [2024-05-15 14:00:26.287211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd8030) on tqpair=0x1c8f280 00:22:27.754 [2024-05-15 14:00:26.287228] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 14:00:26.287238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.287244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287248] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f280) 
00:22:27.754 [2024-05-15 14:00:26.287254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.287261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.754 [2024-05-15 14:00:26.287264] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c8f280) 00:22:27.754 [2024-05-15 14:00:26.287270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.754 [2024-05-15 14:00:26.287278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287282] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c8f280) 00:22:27.755 [2024-05-15 14:00:26.287287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.755 [2024-05-15 14:00:26.287300] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd8030, cid 5, qid 0 00:22:27.755 [2024-05-15 14:00:26.287306] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7ed0, cid 4, qid 0 00:22:27.755 [2024-05-15 14:00:26.287310] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd8190, cid 6, qid 0 00:22:27.755 [2024-05-15 14:00:26.287315] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd82f0, cid 7, qid 0 00:22:27.755 [2024-05-15 14:00:26.287416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.755 [2024-05-15 14:00:26.287421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.755 [2024-05-15 14:00:26.287425] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287428] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=8192, cccid=5 00:22:27.755 [2024-05-15 14:00:26.287433] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cd8030) on tqpair(0x1c8f280): expected_datao=0, payload_size=8192 00:22:27.755 [2024-05-15 14:00:26.287438] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287452] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287456] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287462] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.755 [2024-05-15 14:00:26.287467] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.755 [2024-05-15 14:00:26.287470] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287474] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=512, cccid=4 00:22:27.755 [2024-05-15 14:00:26.287479] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cd7ed0) on tqpair(0x1c8f280): expected_datao=0, payload_size=512 00:22:27.755 [2024-05-15 14:00:26.287484] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287489] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287493] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.755 [2024-05-15 14:00:26.287503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.755 [2024-05-15 14:00:26.287507] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287511] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=512, cccid=6 00:22:27.755 [2024-05-15 14:00:26.287515] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cd8190) on tqpair(0x1c8f280): expected_datao=0, payload_size=512 00:22:27.755 [2024-05-15 14:00:26.287520] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287526] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287529] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287535] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.755 [2024-05-15 14:00:26.287540] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.755 [2024-05-15 14:00:26.287544] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287547] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f280): datao=0, datal=4096, cccid=7 00:22:27.755 [2024-05-15 14:00:26.287552] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cd82f0) on tqpair(0x1c8f280): expected_datao=0, payload_size=4096 00:22:27.755 [2024-05-15 14:00:26.287557] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287563] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287567] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287574] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.287579] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.287583] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287587] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd8030) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.287601] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.287607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.287611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7ed0) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.287625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.287631] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.287635] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287639] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd8190) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.287649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.287655] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.287658] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287662] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd82f0) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.287764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287770] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c8f280) 00:22:27.755 [2024-05-15 14:00:26.287776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.755 [2024-05-15 14:00:26.287791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd82f0, cid 7, qid 0 00:22:27.755 [2024-05-15 14:00:26.287836] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.287842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.287845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd82f0) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.287884] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:27.755 [2024-05-15 14:00:26.287895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.755 [2024-05-15 14:00:26.287902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.755 [2024-05-15 14:00:26.287908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.755 [2024-05-15 14:00:26.287914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.755 [2024-05-15 14:00:26.287922] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287929] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.755 [2024-05-15 14:00:26.287936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.755 [2024-05-15 14:00:26.287950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.755 [2024-05-15 14:00:26.287984] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.287990] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.287993] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.287997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.288005] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.755 [2024-05-15 
14:00:26.288013] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.755 [2024-05-15 14:00:26.288019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.755 [2024-05-15 14:00:26.288034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.755 [2024-05-15 14:00:26.288089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.288095] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.288098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288102] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.288108] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:27.755 [2024-05-15 14:00:26.288113] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:27.755 [2024-05-15 14:00:26.288121] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288125] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288129] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.755 [2024-05-15 14:00:26.288135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.755 [2024-05-15 14:00:26.288147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.755 [2024-05-15 14:00:26.288191] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.288196] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.288200] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288204] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.288214] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288218] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288221] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.755 [2024-05-15 14:00:26.288227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.755 [2024-05-15 14:00:26.288240] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.755 [2024-05-15 14:00:26.288281] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.755 [2024-05-15 14:00:26.288286] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.755 [2024-05-15 14:00:26.288290] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288294] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.755 [2024-05-15 14:00:26.288303] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.755 
[2024-05-15 14:00:26.288307] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.755 [2024-05-15 14:00:26.288311] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.756 [2024-05-15 14:00:26.288317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.756 [2024-05-15 14:00:26.288329] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.756 [2024-05-15 14:00:26.288370] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.756 [2024-05-15 14:00:26.288375] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.756 [2024-05-15 14:00:26.288379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288384] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.756 [2024-05-15 14:00:26.288393] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288397] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288401] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.756 [2024-05-15 14:00:26.288407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.756 [2024-05-15 14:00:26.288419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.756 [2024-05-15 14:00:26.288453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.756 [2024-05-15 14:00:26.288458] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.756 [2024-05-15 14:00:26.288462] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288466] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.756 [2024-05-15 14:00:26.288475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288479] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288483] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.756 [2024-05-15 14:00:26.288489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.756 [2024-05-15 14:00:26.288501] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.756 [2024-05-15 14:00:26.288542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.756 [2024-05-15 14:00:26.288547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.756 [2024-05-15 14:00:26.288551] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288555] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.756 [2024-05-15 14:00:26.288564] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288568] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288572] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.756 [2024-05-15 14:00:26.288578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.756 [2024-05-15 14:00:26.288590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.756 [2024-05-15 14:00:26.288626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.756 [2024-05-15 14:00:26.288631] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.756 [2024-05-15 14:00:26.288635] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288639] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.756 [2024-05-15 14:00:26.288648] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288656] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.756 [2024-05-15 14:00:26.288662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.756 [2024-05-15 14:00:26.288674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.756 [2024-05-15 14:00:26.288715] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.756 [2024-05-15 14:00:26.288720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.756 [2024-05-15 14:00:26.288724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.288729] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.756 [2024-05-15 14:00:26.292781] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.292789] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.292793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f280) 00:22:27.756 [2024-05-15 14:00:26.292801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.756 [2024-05-15 14:00:26.292821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cd7d70, cid 3, qid 0 00:22:27.756 [2024-05-15 14:00:26.292880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.756 [2024-05-15 14:00:26.292886] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.756 [2024-05-15 14:00:26.292890] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.756 [2024-05-15 14:00:26.292894] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cd7d70) on tqpair=0x1c8f280 00:22:27.756 [2024-05-15 14:00:26.292902] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:22:28.024 0 Kelvin (-273 Celsius) 00:22:28.024 Available Spare: 0% 00:22:28.024 Available Spare Threshold: 0% 00:22:28.024 Life Percentage Used: 0% 00:22:28.024 Data Units Read: 0 00:22:28.024 Data Units Written: 0 00:22:28.024 Host Read Commands: 0 00:22:28.024 Host Write Commands: 0 00:22:28.024 Controller Busy Time: 0 minutes 
00:22:28.024 Power Cycles: 0 00:22:28.024 Power On Hours: 0 hours 00:22:28.024 Unsafe Shutdowns: 0 00:22:28.024 Unrecoverable Media Errors: 0 00:22:28.024 Lifetime Error Log Entries: 0 00:22:28.024 Warning Temperature Time: 0 minutes 00:22:28.024 Critical Temperature Time: 0 minutes 00:22:28.024 00:22:28.024 Number of Queues 00:22:28.024 ================ 00:22:28.024 Number of I/O Submission Queues: 127 00:22:28.024 Number of I/O Completion Queues: 127 00:22:28.024 00:22:28.024 Active Namespaces 00:22:28.024 ================= 00:22:28.024 Namespace ID:1 00:22:28.024 Error Recovery Timeout: Unlimited 00:22:28.024 Command Set Identifier: NVM (00h) 00:22:28.024 Deallocate: Supported 00:22:28.024 Deallocated/Unwritten Error: Not Supported 00:22:28.024 Deallocated Read Value: Unknown 00:22:28.024 Deallocate in Write Zeroes: Not Supported 00:22:28.024 Deallocated Guard Field: 0xFFFF 00:22:28.024 Flush: Supported 00:22:28.024 Reservation: Supported 00:22:28.024 Namespace Sharing Capabilities: Multiple Controllers 00:22:28.024 Size (in LBAs): 131072 (0GiB) 00:22:28.024 Capacity (in LBAs): 131072 (0GiB) 00:22:28.024 Utilization (in LBAs): 131072 (0GiB) 00:22:28.024 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:28.024 EUI64: ABCDEF0123456789 00:22:28.024 UUID: 012deaaf-e857-4343-b159-4adc79b383fc 00:22:28.024 Thin Provisioning: Not Supported 00:22:28.024 Per-NS Atomic Units: Yes 00:22:28.024 Atomic Boundary Size (Normal): 0 00:22:28.024 Atomic Boundary Size (PFail): 0 00:22:28.024 Atomic Boundary Offset: 0 00:22:28.024 Maximum Single Source Range Length: 65535 00:22:28.024 Maximum Copy Length: 65535 00:22:28.024 Maximum Source Range Count: 1 00:22:28.024 NGUID/EUI64 Never Reused: No 00:22:28.024 Namespace Write Protected: No 00:22:28.024 Number of LBA Formats: 1 00:22:28.024 Current LBA Format: LBA Format #00 00:22:28.024 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:28.024 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.024 rmmod nvme_tcp 00:22:28.024 rmmod nvme_fabrics 00:22:28.024 rmmod nvme_keyring 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 73491 ']' 00:22:28.024 
14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 73491 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 73491 ']' 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 73491 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73491 00:22:28.024 killing process with pid 73491 00:22:28.024 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:28.025 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:28.025 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73491' 00:22:28.025 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 73491 00:22:28.025 [2024-05-15 14:00:26.455637] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:28.025 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 73491 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:28.284 00:22:28.284 real 0m2.523s 00:22:28.284 user 0m6.337s 00:22:28.284 sys 0m0.765s 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:28.284 14:00:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.284 ************************************ 00:22:28.284 END TEST nvmf_identify 00:22:28.284 ************************************ 00:22:28.284 14:00:26 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:28.284 14:00:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:28.284 14:00:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:28.284 14:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:28.284 ************************************ 00:22:28.284 START TEST nvmf_perf 00:22:28.284 ************************************ 00:22:28.284 14:00:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:28.545 * Looking for test storage... 
00:22:28.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.545 14:00:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:28.545 Cannot find device "nvmf_tgt_br" 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.545 Cannot find device "nvmf_tgt_br2" 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:28.545 Cannot find device "nvmf_tgt_br" 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:28.545 Cannot find device "nvmf_tgt_br2" 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:22:28.545 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:28.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:28.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:28.804 
14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:28.804 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:29.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:22:29.063 00:22:29.063 --- 10.0.0.2 ping statistics --- 00:22:29.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.064 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:29.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:29.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:22:29.064 00:22:29.064 --- 10.0.0.3 ping statistics --- 00:22:29.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.064 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:29.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:22:29.064 00:22:29.064 --- 10.0.0.1 ping statistics --- 00:22:29.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.064 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=73701 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 73701 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 73701 ']' 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:29.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:29.064 14:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:29.064 [2024-05-15 14:00:27.465953] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:29.064 [2024-05-15 14:00:27.466050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.064 [2024-05-15 14:00:27.604048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.322 [2024-05-15 14:00:27.703366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.322 [2024-05-15 14:00:27.703411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:29.322 [2024-05-15 14:00:27.703421] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.322 [2024-05-15 14:00:27.703429] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.322 [2024-05-15 14:00:27.703436] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.322 [2024-05-15 14:00:27.703644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.322 [2024-05-15 14:00:27.703839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.322 [2024-05-15 14:00:27.704663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.322 [2024-05-15 14:00:27.704663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:29.889 14:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:30.457 14:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:30.457 14:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:22:30.457 14:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:22:30.457 14:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:30.734 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:30.734 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:22:30.734 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:30.734 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:30.734 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:31.005 [2024-05-15 14:00:29.322174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.005 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.005 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:31.005 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.264 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:31.264 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:31.523 14:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.523 [2024-05-15 14:00:30.065710] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:31.523 [2024-05-15 14:00:30.065996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.783 14:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:31.783 14:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:31.783 14:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:31.783 14:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:31.783 14:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:33.161 Initializing NVMe Controllers 00:22:33.161 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:33.161 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:33.161 Initialization complete. Launching workers. 00:22:33.161 ======================================================== 00:22:33.161 Latency(us) 00:22:33.161 Device Information : IOPS MiB/s Average min max 00:22:33.161 PCIE (0000:00:10.0) NSID 1 from core 0: 19233.00 75.13 1663.71 619.90 7007.16 00:22:33.161 ======================================================== 00:22:33.161 Total : 19233.00 75.13 1663.71 619.90 7007.16 00:22:33.161 00:22:33.161 14:00:31 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.098 Initializing NVMe Controllers 00:22:34.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:34.099 Initialization complete. Launching workers. 00:22:34.099 ======================================================== 00:22:34.099 Latency(us) 00:22:34.099 Device Information : IOPS MiB/s Average min max 00:22:34.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4463.07 17.43 223.85 83.05 4250.60 00:22:34.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.92 0.48 8133.71 7052.07 12038.14 00:22:34.099 ======================================================== 00:22:34.099 Total : 4586.99 17.92 437.54 83.05 12038.14 00:22:34.099 00:22:34.358 14:00:32 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:35.734 Initializing NVMe Controllers 00:22:35.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.734 Initialization complete. Launching workers. 
00:22:35.734 ======================================================== 00:22:35.734 Latency(us) 00:22:35.734 Device Information : IOPS MiB/s Average min max 00:22:35.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11052.29 43.17 2896.34 456.54 6772.18 00:22:35.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.82 15.63 8029.38 4333.97 12859.51 00:22:35.734 ======================================================== 00:22:35.734 Total : 15053.11 58.80 4260.60 456.54 12859.51 00:22:35.734 00:22:35.734 14:00:34 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:22:35.734 14:00:34 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:38.329 Initializing NVMe Controllers 00:22:38.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.329 Controller IO queue size 128, less than required. 00:22:38.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.329 Controller IO queue size 128, less than required. 00:22:38.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:38.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:38.329 Initialization complete. Launching workers. 00:22:38.329 ======================================================== 00:22:38.329 Latency(us) 00:22:38.329 Device Information : IOPS MiB/s Average min max 00:22:38.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1748.36 437.09 75185.41 32512.31 160252.90 00:22:38.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 673.95 168.49 199263.33 94254.66 314500.96 00:22:38.329 ======================================================== 00:22:38.329 Total : 2422.31 605.58 109706.99 32512.31 314500.96 00:22:38.329 00:22:38.329 14:00:36 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:38.589 Initializing NVMe Controllers 00:22:38.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.589 Controller IO queue size 128, less than required. 00:22:38.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:38.589 Controller IO queue size 128, less than required. 00:22:38.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:22:38.589 WARNING: Some requested NVMe devices were skipped 00:22:38.589 No valid NVMe controllers or AIO or URING devices found 00:22:38.589 14:00:36 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:41.140 Initializing NVMe Controllers 00:22:41.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.140 Controller IO queue size 128, less than required. 00:22:41.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:41.140 Controller IO queue size 128, less than required. 00:22:41.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:41.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:41.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:41.140 Initialization complete. Launching workers. 00:22:41.140 00:22:41.140 ==================== 00:22:41.140 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:41.140 TCP transport: 00:22:41.140 polls: 5290 00:22:41.140 idle_polls: 0 00:22:41.140 sock_completions: 5290 00:22:41.140 nvme_completions: 4881 00:22:41.140 submitted_requests: 7274 00:22:41.140 queued_requests: 1 00:22:41.140 00:22:41.140 ==================== 00:22:41.140 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:41.140 TCP transport: 00:22:41.140 polls: 5341 00:22:41.140 idle_polls: 0 00:22:41.140 sock_completions: 5341 00:22:41.140 nvme_completions: 5781 00:22:41.140 submitted_requests: 8696 00:22:41.140 queued_requests: 1 00:22:41.140 ======================================================== 00:22:41.140 Latency(us) 00:22:41.140 Device Information : IOPS MiB/s Average min max 00:22:41.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1218.51 304.63 106769.17 52146.20 186051.28 00:22:41.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1443.23 360.81 89442.11 43277.15 159859.66 00:22:41.140 ======================================================== 00:22:41.140 Total : 2661.74 665.43 97374.20 43277.15 186051.28 00:22:41.140 00:22:41.140 14:00:39 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:41.140 14:00:39 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.401 rmmod nvme_tcp 00:22:41.401 rmmod nvme_fabrics 00:22:41.401 rmmod nvme_keyring 00:22:41.401 14:00:39 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 73701 ']' 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 73701 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 73701 ']' 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 73701 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73701 00:22:41.401 killing process with pid 73701 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73701' 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 73701 00:22:41.401 [2024-05-15 14:00:39.856856] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:41.401 14:00:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 73701 00:22:42.337 14:00:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:42.338 00:22:42.338 real 0m13.916s 00:22:42.338 user 0m48.872s 00:22:42.338 sys 0m4.747s 00:22:42.338 ************************************ 00:22:42.338 END TEST nvmf_perf 00:22:42.338 ************************************ 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:42.338 14:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:42.338 14:00:40 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:42.338 14:00:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:42.338 14:00:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:42.338 14:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.338 ************************************ 00:22:42.338 START TEST nvmf_fio_host 00:22:42.338 ************************************ 00:22:42.338 14:00:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:42.597 * Looking for test storage... 00:22:42.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.597 14:00:40 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.597 14:00:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:42.597 14:00:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.597 14:00:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.597 14:00:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.597 14:00:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.597 14:00:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:42.598 14:00:41 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:42.598 Cannot find device "nvmf_tgt_br" 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:42.598 Cannot find device "nvmf_tgt_br2" 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:42.598 Cannot find device "nvmf_tgt_br" 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:42.598 Cannot find device "nvmf_tgt_br2" 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:22:42.598 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:42.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:42.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:42.903 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:43.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:22:43.178 00:22:43.178 --- 10.0.0.2 ping statistics --- 00:22:43.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.178 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:43.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:43.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:22:43.178 00:22:43.178 --- 10.0.0.3 ping statistics --- 00:22:43.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.178 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:43.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:22:43.178 00:22:43.178 --- 10.0.0.1 ping statistics --- 00:22:43.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.178 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=74106 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 74106 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 74106 ']' 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:43.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:43.178 14:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.178 [2024-05-15 14:00:41.602802] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:43.178 [2024-05-15 14:00:41.603365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.437 [2024-05-15 14:00:41.750813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.437 [2024-05-15 14:00:41.908821] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.437 [2024-05-15 14:00:41.908899] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:43.437 [2024-05-15 14:00:41.908909] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.437 [2024-05-15 14:00:41.908919] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.437 [2024-05-15 14:00:41.908926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.438 [2024-05-15 14:00:41.909199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.438 [2024-05-15 14:00:41.909546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.438 [2024-05-15 14:00:41.910325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.438 [2024-05-15 14:00:41.910325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.005 [2024-05-15 14:00:42.534030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.005 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.265 Malloc1 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.265 [2024-05-15 14:00:42.679786] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:44.265 [2024-05-15 14:00:42.680213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:44.265 14:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:44.266 14:00:42 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.536 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:44.536 fio-3.35 00:22:44.536 Starting 1 thread 00:22:47.114 00:22:47.114 test: (groupid=0, jobs=1): err= 0: pid=74161: Wed May 15 14:00:45 2024 00:22:47.114 read: IOPS=8563, BW=33.4MiB/s (35.1MB/s)(67.1MiB/2007msec) 00:22:47.114 slat (nsec): min=1564, max=4028.8k, avg=2080.36, stdev=30857.89 00:22:47.114 clat (usec): min=4148, max=14128, avg=7834.51, stdev=703.62 00:22:47.114 lat (usec): min=4150, max=14129, avg=7836.59, stdev=703.09 00:22:47.114 clat percentiles (usec): 00:22:47.114 | 1.00th=[ 6259], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:22:47.114 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:22:47.114 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:22:47.114 | 99.00th=[ 9372], 99.50th=[10814], 99.90th=[12911], 99.95th=[13435], 00:22:47.114 | 99.99th=[14091] 00:22:47.114 bw ( KiB/s): min=32424, max=35960, per=99.92%, avg=34224.00, stdev=1621.75, samples=4 00:22:47.114 iops : min= 8106, max= 8990, avg=8556.00, stdev=405.44, samples=4 00:22:47.114 write: IOPS=8558, BW=33.4MiB/s (35.1MB/s)(67.1MiB/2007msec); 0 zone resets 00:22:47.114 slat (nsec): min=1623, max=203853, avg=1921.01, stdev=1896.92 00:22:47.114 clat (usec): min=3227, max=13477, avg=7082.72, stdev=671.59 00:22:47.114 lat (usec): min=3229, max=13479, avg=7084.65, stdev=671.60 00:22:47.114 clat percentiles (usec): 00:22:47.114 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6652], 00:22:47.114 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7242], 00:22:47.114 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 7963], 00:22:47.114 | 99.00th=[ 8586], 99.50th=[10552], 99.90th=[12649], 99.95th=[13042], 00:22:47.114 | 99.99th=[13435] 00:22:47.114 bw ( KiB/s): min=33024, max=36280, per=100.00%, avg=34248.00, stdev=1490.42, samples=4 00:22:47.114 iops : min= 8256, max= 9070, avg=8562.00, stdev=372.61, samples=4 00:22:47.114 lat (msec) : 4=0.04%, 10=99.33%, 20=0.62% 00:22:47.114 cpu : usr=68.69%, sys=24.78%, ctx=595, majf=0, minf=4 00:22:47.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:47.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:47.114 issued rwts: total=17186,17177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:47.114 00:22:47.114 Run status group 0 (all jobs): 00:22:47.114 READ: bw=33.4MiB/s (35.1MB/s), 33.4MiB/s-33.4MiB/s (35.1MB/s-35.1MB/s), io=67.1MiB (70.4MB), run=2007-2007msec 00:22:47.114 WRITE: bw=33.4MiB/s (35.1MB/s), 33.4MiB/s-33.4MiB/s (35.1MB/s-35.1MB/s), io=67.1MiB (70.4MB), run=2007-2007msec 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:47.114 
14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:47.114 14:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:47.114 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:47.114 fio-3.35 00:22:47.114 Starting 1 thread 00:22:49.661 00:22:49.661 test: (groupid=0, jobs=1): err= 0: pid=74215: Wed May 15 14:00:47 2024 00:22:49.661 read: IOPS=8223, BW=128MiB/s (135MB/s)(258MiB/2010msec) 00:22:49.661 slat (nsec): min=2502, max=95690, avg=3033.05, stdev=1793.82 00:22:49.661 clat (usec): min=2057, max=17327, avg=8849.78, stdev=2208.67 00:22:49.661 lat (usec): min=2060, max=17330, avg=8852.81, stdev=2208.66 00:22:49.661 clat percentiles (usec): 00:22:49.661 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6915], 00:22:49.661 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9241], 00:22:49.661 | 70.00th=[10028], 80.00th=[10945], 90.00th=[11863], 95.00th=[12518], 00:22:49.661 | 99.00th=[13698], 99.50th=[14746], 99.90th=[15795], 99.95th=[15926], 00:22:49.661 | 99.99th=[17171] 00:22:49.661 bw ( KiB/s): min=57120, max=73440, per=51.21%, avg=67384.00, stdev=7500.72, samples=4 00:22:49.661 iops : min= 3570, max= 4590, avg=4211.50, stdev=468.80, samples=4 00:22:49.661 write: IOPS=4695, BW=73.4MiB/s 
(76.9MB/s)(137MiB/1871msec); 0 zone resets 00:22:49.661 slat (usec): min=28, max=287, avg=32.95, stdev= 6.74 00:22:49.661 clat (usec): min=5944, max=20173, avg=11622.75, stdev=2294.58 00:22:49.661 lat (usec): min=5975, max=20204, avg=11655.70, stdev=2294.53 00:22:49.661 clat percentiles (usec): 00:22:49.661 | 1.00th=[ 7242], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9634], 00:22:49.661 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:22:49.661 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14877], 95.00th=[15795], 00:22:49.661 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18744], 99.95th=[19268], 00:22:49.661 | 99.99th=[20055] 00:22:49.661 bw ( KiB/s): min=59744, max=76448, per=92.95%, avg=69840.00, stdev=7824.70, samples=4 00:22:49.661 iops : min= 3734, max= 4778, avg=4365.00, stdev=489.04, samples=4 00:22:49.661 lat (msec) : 4=0.40%, 10=54.79%, 20=44.80%, 50=0.01% 00:22:49.661 cpu : usr=78.20%, sys=18.22%, ctx=9, majf=0, minf=26 00:22:49.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:49.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:49.661 issued rwts: total=16529,8786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:49.661 00:22:49.661 Run status group 0 (all jobs): 00:22:49.661 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2010-2010msec 00:22:49.661 WRITE: bw=73.4MiB/s (76.9MB/s), 73.4MiB/s-73.4MiB/s (76.9MB/s-76.9MB/s), io=137MiB (144MB), run=1871-1871msec 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.661 rmmod nvme_tcp 00:22:49.661 rmmod nvme_fabrics 00:22:49.661 rmmod nvme_keyring 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74106 ']' 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74106 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@946 -- # '[' -z 74106 ']' 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 74106 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74106 00:22:49.661 killing process with pid 74106 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74106' 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 74106 00:22:49.661 [2024-05-15 14:00:47.948250] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:49.661 14:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 74106 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:49.955 00:22:49.955 real 0m7.599s 00:22:49.955 user 0m28.146s 00:22:49.955 sys 0m2.593s 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:49.955 14:00:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.955 ************************************ 00:22:49.955 END TEST nvmf_fio_host 00:22:49.955 ************************************ 00:22:49.955 14:00:48 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:49.955 14:00:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:49.955 14:00:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:49.955 14:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.955 ************************************ 00:22:49.955 START TEST nvmf_failover 00:22:49.955 ************************************ 00:22:49.955 14:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:50.213 * Looking for test storage... 
00:22:50.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.213 14:00:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.214 
14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:50.214 Cannot find device "nvmf_tgt_br" 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:50.214 Cannot find device "nvmf_tgt_br2" 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:50.214 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:50.472 Cannot find device "nvmf_tgt_br" 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:50.472 Cannot find device "nvmf_tgt_br2" 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:50.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:50.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:50.472 14:00:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:50.472 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:50.472 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:50.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:50.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:22:50.730 00:22:50.730 --- 10.0.0.2 ping statistics --- 00:22:50.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.730 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:50.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:50.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:22:50.730 00:22:50.730 --- 10.0.0.3 ping statistics --- 00:22:50.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.730 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:50.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:22:50.730 00:22:50.730 --- 10.0.0.1 ping statistics --- 00:22:50.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.730 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=74424 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 74424 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 74424 ']' 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.730 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.731 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
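For readers reproducing this environment by hand, the nvmf_veth_init sequence traced above boils down to roughly the following condensed sketch (interface, namespace, and address names are the defaults shown in test/nvmf/common.sh; the per-step error handling and cleanup of stale devices are omitted):

    # Condensed from the nvmf_veth_init trace above (run as root).
    # Two data-path interfaces (10.0.0.2, 10.0.0.3) live in the target namespace;
    # the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the root-namespace veth peers so the initiator and both target
    # interfaces share one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Allow NVMe/TCP traffic in and bridged traffic through, then verify connectivity.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three successful pings above confirm this topology before the target is started.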
00:22:50.731 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.731 14:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.731 [2024-05-15 14:00:49.254120] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:22:50.731 [2024-05-15 14:00:49.254297] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.999 [2024-05-15 14:00:49.390458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.999 [2024-05-15 14:00:49.504942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.999 [2024-05-15 14:00:49.505005] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.999 [2024-05-15 14:00:49.505015] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.999 [2024-05-15 14:00:49.505024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.999 [2024-05-15 14:00:49.505032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.999 [2024-05-15 14:00:49.505182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.999 [2024-05-15 14:00:49.505308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.999 [2024-05-15 14:00:49.505392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.936 14:00:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.936 14:00:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:51.936 14:00:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.936 14:00:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.936 14:00:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.936 14:00:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.936 14:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:52.218 [2024-05-15 14:00:50.581663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.218 14:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:52.507 Malloc0 00:22:52.507 14:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.767 14:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.026 14:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.026 [2024-05-15 14:00:51.586305] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:53.026 [2024-05-15 
14:00:51.586614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.287 14:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:53.287 [2024-05-15 14:00:51.802348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:53.287 14:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:53.546 [2024-05-15 14:00:52.026181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74482 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74482 /var/tmp/bdevperf.sock 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 74482 ']' 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
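Taken together, the target-side configuration traced above amounts to the following RPC sequence (a condensed sketch using the paths shown in this workspace; the test additionally waits for each application's RPC socket before issuing commands, which is omitted here):

    # Condensed from the trace above: start nvmf_tgt inside the namespace, then
    # build a TCP subsystem with one Malloc namespace and three listeners.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # Separate bdevperf instance acting as the NVMe/TCP host, driven over its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

Note the deprecation warning emitted for nvmf_subsystem_add_listener above: [listen_]address.transport is accepted here but is slated for removal in favor of trtype in v24.09.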
00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.546 14:00:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:54.500 14:00:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.500 14:00:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:54.500 14:00:53 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:54.765 NVMe0n1 00:22:54.765 14:00:53 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.023 00:22:55.281 14:00:53 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:55.281 14:00:53 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74504 00:22:55.281 14:00:53 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:56.216 14:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.475 14:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:59.761 14:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.761 00:22:59.761 14:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.761 14:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:03.043 14:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.043 [2024-05-15 14:01:01.472326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.043 14:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:03.989 14:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:04.247 14:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 74504 00:23:10.826 0 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 74482 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 74482 ']' 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 74482 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74482 00:23:10.826 killing process with pid 74482 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:10.826 
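With the host attached, the failover itself is exercised by moving listeners underneath it while verify I/O is in flight. A condensed sketch of the sequence traced above (same rpc.py path and bdevperf socket as before; the waits on the RPC sockets are again omitted):

    # Condensed from the host/failover.sh trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    sock=/var/tmp/bdevperf.sock

    # Attach the host to two portals of the same subsystem.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn

    # Kick off 15 seconds of verify I/O against NVMe0n1 in the background.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
    run_test_pid=$!

    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # fail the first path
    sleep 3
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # fail the second path
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # bring the first path back
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    wait "$run_test_pid"   # the "0" printed above indicates the run completed without I/O errors
    # The test then kills bdevperf (pid 74482) and dumps try.txt, which is the
    # qpair/abort log reproduced below.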
14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74482' 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 74482 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 74482 00:23:10.826 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:10.826 [2024-05-15 14:00:52.096319] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:10.826 [2024-05-15 14:00:52.096415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74482 ] 00:23:10.826 [2024-05-15 14:00:52.240684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.826 [2024-05-15 14:00:52.366932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.826 Running I/O for 15 seconds... 00:23:10.826 [2024-05-15 14:00:54.799953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.826 [2024-05-15 14:00:54.800822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-05-15 14:00:54.800851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.826 [2024-05-15 14:00:54.800867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.800881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.800896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.800910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.800926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.800940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.800955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.800969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.800984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.800998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 
[2024-05-15 14:00:54.801227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.827 [2024-05-15 14:00:54.801806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.801970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.801989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.802005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.802019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.802035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.802049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.802064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.802081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.802096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.827 [2024-05-15 14:00:54.802111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-05-15 14:00:54.802126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:29 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.802769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 
[2024-05-15 14:00:54.802797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.802975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.802990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.803003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.803018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.828 [2024-05-15 14:00:54.803031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.803046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.803061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.828 [2024-05-15 14:00:54.803076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.828 [2024-05-15 14:00:54.803090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.829 [2024-05-15 14:00:54.803118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.829 [2024-05-15 14:00:54.803152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.829 [2024-05-15 14:00:54.803180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.829 [2024-05-15 14:00:54.803210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.829 [2024-05-15 14:00:54.803238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213ba50 is same with the state(5) to be set 00:23:10.829 [2024-05-15 14:00:54.803272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93688 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94016 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803391] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94024 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94032 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94040 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94048 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94056 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94064 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94072 len:8 PRP1 
0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94080 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94088 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94096 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.829 [2024-05-15 14:00:54.803873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.829 [2024-05-15 14:00:54.803883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.829 [2024-05-15 14:00:54.803898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94104 len:8 PRP1 0x0 PRP2 0x0 00:23:10.829 [2024-05-15 14:00:54.803911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.803935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.803945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.803956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94112 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.803970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.803984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.803994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94120 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804018] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94128 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94136 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94144 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94152 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94160 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94168 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94176 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94184 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94192 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.830 [2024-05-15 14:00:54.804483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.830 [2024-05-15 14:00:54.804493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94200 len:8 PRP1 0x0 PRP2 0x0 00:23:10.830 [2024-05-15 14:00:54.804506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.830 [2024-05-15 14:00:54.804601] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x213ba50 was disconnected and freed. reset controller. 
00:23:10.830 [2024-05-15 14:00:54.804620] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:10.830 [2024-05-15 14:00:54.804694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.830 [2024-05-15 14:00:54.804710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.830 [2024-05-15 14:00:54.804726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.830 [2024-05-15 14:00:54.804739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.830 [2024-05-15 14:00:54.804763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.830 [2024-05-15 14:00:54.804776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.830 [2024-05-15 14:00:54.804790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.830 [2024-05-15 14:00:54.804804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.830 [2024-05-15 14:00:54.804818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.830 [2024-05-15 14:00:54.821381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cc090 (9): Bad file descriptor
00:23:10.830 [2024-05-15 14:00:54.825664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.830 [2024-05-15 14:00:54.855363] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:10.830 [2024-05-15 14:00:58.267330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.831 [2024-05-15 14:00:58.267597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.831 [2024-05-15 14:00:58.267954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.831 [2024-05-15 14:00:58.267968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.267980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.267994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.832 [2024-05-15 14:00:58.268489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268515] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.832 [2024-05-15 14:00:58.268698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.832 [2024-05-15 14:00:58.268712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268789] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.268920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.268946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.268972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.268986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.268998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 
[2024-05-15 14:00:58.269331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.833 [2024-05-15 14:00:58.269344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.833 [2024-05-15 14:00:58.269453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.833 [2024-05-15 14:00:58.269475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.834 [2024-05-15 14:00:58.269789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.269984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.269997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9736 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-05-15 14:00:58.270227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-05-15 14:00:58.270240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-05-15 14:00:58.270266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-05-15 14:00:58.270292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-05-15 14:00:58.270318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-05-15 14:00:58.270344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-05-15 14:00:58.270370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-05-15 14:00:58.270397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213f9f0 is same with the state(5) to be set 00:23:10.835 [2024-05-15 
14:00:58.270426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9816 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10216 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10224 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10232 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270692] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10248 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10256 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10264 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10280 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.835 [2024-05-15 14:00:58.270930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10288 len:8 PRP1 0x0 PRP2 0x0 00:23:10.835 [2024-05-15 14:00:58.270951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-05-15 14:00:58.270963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:23:10.835 [2024-05-15 14:00:58.270973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.835 [2024-05-15 14:00:58.270982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10296 len:8 PRP1 0x0 PRP2 0x0 00:23:10.836 [2024-05-15 14:00:58.270994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.836 [2024-05-15 14:00:58.271016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.836 [2024-05-15 14:00:58.271025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:8 PRP1 0x0 PRP2 0x0 00:23:10.836 [2024-05-15 14:00:58.271037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.836 [2024-05-15 14:00:58.271058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.836 [2024-05-15 14:00:58.271067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10312 len:8 PRP1 0x0 PRP2 0x0 00:23:10.836 [2024-05-15 14:00:58.271079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.836 [2024-05-15 14:00:58.271100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.836 [2024-05-15 14:00:58.271109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10320 len:8 PRP1 0x0 PRP2 0x0 00:23:10.836 [2024-05-15 14:00:58.271121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.836 [2024-05-15 14:00:58.271151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.836 [2024-05-15 14:00:58.271161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10328 len:8 PRP1 0x0 PRP2 0x0 00:23:10.836 [2024-05-15 14:00:58.271173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271223] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x213f9f0 was disconnected and freed. reset controller. 
00:23:10.836 [2024-05-15 14:00:58.271239] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:10.836 [2024-05-15 14:00:58.271286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:00:58.271301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:00:58.271327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:00:58.271352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:00:58.271377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:00:58.271389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.836 [2024-05-15 14:00:58.271433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cc090 (9): Bad file descriptor 00:23:10.836 [2024-05-15 14:00:58.274176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.836 [2024-05-15 14:00:58.307952] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:10.836 [2024-05-15 14:01:02.672412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:01:02.672479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.672495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:01:02.672508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.672521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:01:02.672534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.672546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.836 [2024-05-15 14:01:02.672558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.672571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cc090 is same with the state(5) to be set 00:23:10.836 [2024-05-15 14:01:02.673040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-05-15 14:01:02.673339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-05-15 14:01:02.673352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.673377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.673404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.673436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.673483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.673510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.837 [2024-05-15 14:01:02.673945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.673971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.673985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.673997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.674010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.674022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.674037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.674049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 
[2024-05-15 14:01:02.674062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.674075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.674088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.837 [2024-05-15 14:01:02.674101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.837 [2024-05-15 14:01:02.674114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:124 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.674798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32928 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.674979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.674993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.838 [2024-05-15 14:01:02.675005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.675019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.675031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.675045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.675057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.675070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.675083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.675096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.675109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.675127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 
[2024-05-15 14:01:02.675140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.675154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.838 [2024-05-15 14:01:02.675167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.838 [2024-05-15 14:01:02.675181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.839 [2024-05-15 14:01:02.675640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.675978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.675991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.676004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.676017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.676030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.839 [2024-05-15 14:01:02.676043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.676056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3630 is same with the state(5) to be set 00:23:10.839 [2024-05-15 14:01:02.676071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.839 [2024-05-15 14:01:02.676080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.839 [2024-05-15 14:01:02.676089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32648 len:8 PRP1 0x0 PRP2 0x0 00:23:10.839 [2024-05-15 14:01:02.676101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.676114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.839 [2024-05-15 14:01:02.676123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.839 [2024-05-15 14:01:02.676137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33104 len:8 PRP1 0x0 PRP2 0x0 00:23:10.839 [2024-05-15 14:01:02.676149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.676168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.839 [2024-05-15 14:01:02.676177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.839 [2024-05-15 14:01:02.676187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33112 len:8 PRP1 0x0 PRP2 0x0 00:23:10.839 [2024-05-15 14:01:02.676199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.676211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.839 [2024-05-15 14:01:02.676220] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.839 [2024-05-15 14:01:02.676230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33120 len:8 PRP1 0x0 PRP2 0x0 00:23:10.839 [2024-05-15 14:01:02.676242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.839 [2024-05-15 14:01:02.676254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.839 [2024-05-15 14:01:02.676263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.839 [2024-05-15 14:01:02.676272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33128 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33136 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33144 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33152 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33160 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33168 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33176 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33184 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33192 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33200 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33208 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 
14:01:02.676758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33216 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.840 [2024-05-15 14:01:02.676792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.840 [2024-05-15 14:01:02.676801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33224 len:8 PRP1 0x0 PRP2 0x0 00:23:10.840 [2024-05-15 14:01:02.676813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.840 [2024-05-15 14:01:02.676866] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22f3630 was disconnected and freed. reset controller. 00:23:10.840 [2024-05-15 14:01:02.676881] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:10.840 [2024-05-15 14:01:02.676893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.840 [2024-05-15 14:01:02.679645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.840 [2024-05-15 14:01:02.679687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cc090 (9): Bad file descriptor 00:23:10.840 [2024-05-15 14:01:02.707845] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:10.840 00:23:10.840 Latency(us) 00:23:10.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.840 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:10.840 Verification LBA range: start 0x0 length 0x4000 00:23:10.840 NVMe0n1 : 15.01 11237.16 43.90 247.15 0.00 11121.61 440.85 28635.81 00:23:10.840 =================================================================================================================== 00:23:10.840 Total : 11237.16 43.90 247.15 0.00 11121.61 440.85 28635.81 00:23:10.840 Received shutdown signal, test time was about 15.000000 seconds 00:23:10.840 00:23:10.840 Latency(us) 00:23:10.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.840 =================================================================================================================== 00:23:10.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74678 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74678 /var/tmp/bdevperf.sock 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 74678 ']' 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.840 14:01:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.407 14:01:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:11.407 14:01:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:23:11.407 14:01:09 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:11.666 [2024-05-15 14:01:10.025777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:11.666 14:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:11.666 [2024-05-15 14:01:10.209704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:11.925 14:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.925 NVMe0n1 00:23:12.229 14:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.229 00:23:12.229 14:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.488 00:23:12.488 14:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.488 14:01:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:12.746 14:01:11 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:13.005 14:01:11 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:16.291 14:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:16.291 14:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:16.291 14:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=74755 00:23:16.291 14:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:16.291 14:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 74755 00:23:17.229 0 00:23:17.229 14:01:15 nvmf_tcp.nvmf_failover -- 
host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:17.229 [2024-05-15 14:01:09.016673] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:17.229 [2024-05-15 14:01:09.017530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74678 ] 00:23:17.229 [2024-05-15 14:01:09.164561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.229 [2024-05-15 14:01:09.259599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.229 [2024-05-15 14:01:11.335178] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:17.229 [2024-05-15 14:01:11.335346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.229 [2024-05-15 14:01:11.335368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.229 [2024-05-15 14:01:11.335387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.229 [2024-05-15 14:01:11.335399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.229 [2024-05-15 14:01:11.335413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.229 [2024-05-15 14:01:11.335426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.229 [2024-05-15 14:01:11.335439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.229 [2024-05-15 14:01:11.335452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.229 [2024-05-15 14:01:11.335466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:17.229 [2024-05-15 14:01:11.335524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:17.229 [2024-05-15 14:01:11.335556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197b090 (9): Bad file descriptor 00:23:17.229 [2024-05-15 14:01:11.341077] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:17.229 Running I/O for 1 seconds... 
00:23:17.229 00:23:17.229 Latency(us) 00:23:17.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.229 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:17.229 Verification LBA range: start 0x0 length 0x4000 00:23:17.229 NVMe0n1 : 1.01 10522.93 41.11 0.00 0.00 12089.54 1065.95 18107.94 00:23:17.229 =================================================================================================================== 00:23:17.229 Total : 10522.93 41.11 0.00 0.00 12089.54 1065.95 18107.94 00:23:17.229 14:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.229 14:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:17.488 14:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:17.748 14:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:17.748 14:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.748 14:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.007 14:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 74678 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 74678 ']' 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 74678 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74678 00:23:21.293 killing process with pid 74678 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74678' 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 74678 00:23:21.293 14:01:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 74678 00:23:21.552 14:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:21.552 14:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:21.810 14:01:20 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.810 rmmod nvme_tcp 00:23:21.810 rmmod nvme_fabrics 00:23:21.810 rmmod nvme_keyring 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 74424 ']' 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 74424 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 74424 ']' 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 74424 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:21.810 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74424 00:23:22.069 killing process with pid 74424 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74424' 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 74424 00:23:22.069 [2024-05-15 14:01:20.381523] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 74424 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.069 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.328 14:01:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:22.328 00:23:22.328 real 0m32.153s 00:23:22.328 user 2m1.376s 00:23:22.328 sys 0m6.666s 00:23:22.328 14:01:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:22.328 14:01:20 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.328 ************************************ 00:23:22.328 END TEST nvmf_failover 00:23:22.328 ************************************ 00:23:22.328 14:01:20 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:22.328 14:01:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:22.328 14:01:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:22.328 14:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.328 ************************************ 00:23:22.328 START TEST nvmf_host_discovery 00:23:22.328 ************************************ 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:22.328 * Looking for test storage... 00:23:22.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.328 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:22.587 14:01:20 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:22.587 Cannot find device 
"nvmf_tgt_br" 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.587 Cannot find device "nvmf_tgt_br2" 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:22.587 Cannot find device "nvmf_tgt_br" 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:23:22.587 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:22.588 Cannot find device "nvmf_tgt_br2" 00:23:22.588 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:23:22.588 14:01:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:22.588 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:22.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:23:22.846 00:23:22.846 --- 10.0.0.2 ping statistics --- 00:23:22.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.846 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:22.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:22.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:22.846 00:23:22.846 --- 10.0.0.3 ping statistics --- 00:23:22.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.846 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:22.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:22.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:23:22.846 00:23:22.846 --- 10.0.0.1 ping statistics --- 00:23:22.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.846 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:22.846 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75019 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75019 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 75019 ']' 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.104 14:01:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.104 [2024-05-15 14:01:21.458986] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:23.104 [2024-05-15 14:01:21.459065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.104 [2024-05-15 14:01:21.600157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.362 [2024-05-15 14:01:21.702811] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.362 [2024-05-15 14:01:21.702861] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:23.362 [2024-05-15 14:01:21.702871] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.362 [2024-05-15 14:01:21.702879] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.362 [2024-05-15 14:01:21.702886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.362 [2024-05-15 14:01:21.702912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 [2024-05-15 14:01:22.355161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 [2024-05-15 14:01:22.367090] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:23.929 [2024-05-15 14:01:22.367289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 null0 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 null1 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:23.929 
14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75051 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75051 /tmp/host.sock 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 75051 ']' 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:23.929 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.929 14:01:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.929 [2024-05-15 14:01:22.459462] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:23.929 [2024-05-15 14:01:22.459881] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75051 ] 00:23:24.187 [2024-05-15 14:01:22.599000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.187 [2024-05-15 14:01:22.701971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.755 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- 
# get_subsystem_names 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 
14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.014 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 [2024-05-15 14:01:23.665634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.274 14:01:23 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.274 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.275 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.533 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.533 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:23:25.533 14:01:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:23:25.792 [2024-05-15 14:01:24.314522] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:25.792 [2024-05-15 14:01:24.314561] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:25.792 [2024-05-15 14:01:24.314576] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.792 [2024-05-15 14:01:24.320548] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:26.050 [2024-05-15 14:01:24.376408] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:26.050 [2024-05-15 14:01:24.376439] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.616 14:01:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.616 14:01:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.617 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.875 [2024-05-15 14:01:25.204617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:26.875 [2024-05-15 14:01:25.205520] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:26.875 [2024-05-15 14:01:25.205552] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:26.875 [2024-05-15 14:01:25.211497] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.875 [2024-05-15 14:01:25.275638] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:26.875 [2024-05-15 14:01:25.275661] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:26.875 [2024-05-15 14:01:25.275668] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.875 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.876 14:01:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.876 [2024-05-15 14:01:25.421505] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:26.876 [2024-05-15 14:01:25.421699] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.876 [2024-05-15 14:01:25.422140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.876 [2024-05-15 14:01:25.422263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-05-15 14:01:25.422280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.876 [2024-05-15 14:01:25.422291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-05-15 14:01:25.422302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.876 [2024-05-15 14:01:25.422312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-05-15 14:01:25.422322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.876 [2024-05-15 14:01:25.422332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-05-15 14:01:25.422342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec73c0 is same with the state(5) to be set 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:26.876 14:01:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:26.876 [2024-05-15 14:01:25.427482] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:26.876 [2024-05-15 14:01:25.427505] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:26.876 [2024-05-15 14:01:25.427556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec73c0 (9): Bad file descriptor 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.876 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.135 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.136 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:27.395 14:01:25 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.395 14:01:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.331 [2024-05-15 14:01:26.841023] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:28.331 [2024-05-15 14:01:26.841059] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:28.331 [2024-05-15 14:01:26.841073] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.331 [2024-05-15 14:01:26.847037] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:28.590 [2024-05-15 14:01:26.906139] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:28.590 [2024-05-15 14:01:26.906356] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.590 request: 00:23:28.590 { 00:23:28.590 "name": "nvme", 00:23:28.590 "trtype": "tcp", 00:23:28.590 "traddr": "10.0.0.2", 00:23:28.590 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:28.590 "adrfam": "ipv4", 00:23:28.590 "trsvcid": "8009", 00:23:28.590 "wait_for_attach": true, 00:23:28.590 "method": "bdev_nvme_start_discovery", 00:23:28.590 "req_id": 1 00:23:28.590 } 00:23:28.590 Got JSON-RPC error response 00:23:28.590 response: 00:23:28.590 { 00:23:28.590 "code": -17, 00:23:28.590 "message": "File exists" 00:23:28.590 } 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.590 14:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.590 request: 00:23:28.590 { 00:23:28.590 "name": "nvme_second", 00:23:28.590 "trtype": "tcp", 00:23:28.590 "traddr": "10.0.0.2", 00:23:28.590 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:28.590 "adrfam": "ipv4", 00:23:28.590 "trsvcid": "8009", 00:23:28.590 "wait_for_attach": true, 00:23:28.590 "method": "bdev_nvme_start_discovery", 00:23:28.590 "req_id": 1 00:23:28.590 } 00:23:28.590 Got JSON-RPC error response 00:23:28.590 response: 00:23:28.590 { 00:23:28.590 "code": -17, 00:23:28.590 "message": "File exists" 00:23:28.590 } 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.590 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.849 14:01:27 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.849 14:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.787 [2024-05-15 14:01:28.170214] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.787 [2024-05-15 14:01:28.170331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.787 [2024-05-15 14:01:28.170365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.787 [2024-05-15 14:01:28.170377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6d840 with addr=10.0.0.2, port=8010 00:23:29.787 [2024-05-15 14:01:28.170398] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:29.787 [2024-05-15 14:01:28.170407] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:29.787 [2024-05-15 14:01:28.170416] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:30.806 [2024-05-15 14:01:29.168587] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.806 [2024-05-15 14:01:29.168678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.806 [2024-05-15 14:01:29.168709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.806 [2024-05-15 14:01:29.168720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6dab0 with addr=10.0.0.2, port=8010 00:23:30.807 [2024-05-15 14:01:29.168755] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:30.807 [2024-05-15 14:01:29.168765] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:30.807 [2024-05-15 14:01:29.168773] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:31.744 [2024-05-15 14:01:30.166826] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:31.744 request: 00:23:31.744 { 00:23:31.744 "name": "nvme_second", 00:23:31.744 
"trtype": "tcp", 00:23:31.744 "traddr": "10.0.0.2", 00:23:31.744 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:31.744 "adrfam": "ipv4", 00:23:31.744 "trsvcid": "8010", 00:23:31.744 "attach_timeout_ms": 3000, 00:23:31.744 "method": "bdev_nvme_start_discovery", 00:23:31.744 "req_id": 1 00:23:31.744 } 00:23:31.744 Got JSON-RPC error response 00:23:31.744 response: 00:23:31.744 { 00:23:31.744 "code": -110, 00:23:31.744 "message": "Connection timed out" 00:23:31.744 } 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75051 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.744 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.744 rmmod nvme_tcp 00:23:31.744 rmmod nvme_fabrics 00:23:32.003 rmmod nvme_keyring 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75019 ']' 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75019 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 75019 ']' 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 75019 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # uname 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75019 00:23:32.003 killing process with pid 75019 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75019' 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 75019 00:23:32.003 [2024-05-15 14:01:30.387553] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:32.003 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 75019 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:32.262 00:23:32.262 real 0m9.919s 00:23:32.262 user 0m18.198s 00:23:32.262 sys 0m2.559s 00:23:32.262 ************************************ 00:23:32.262 END TEST nvmf_host_discovery 00:23:32.262 ************************************ 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.262 14:01:30 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:32.262 14:01:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:32.262 14:01:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:32.262 14:01:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:32.262 ************************************ 00:23:32.262 START TEST nvmf_host_multipath_status 00:23:32.262 ************************************ 00:23:32.262 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:32.521 * Looking for test storage... 
00:23:32.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.521 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:32.522 Cannot find device "nvmf_tgt_br" 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:23:32.522 Cannot find device "nvmf_tgt_br2" 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:32.522 14:01:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:32.522 Cannot find device "nvmf_tgt_br" 00:23:32.522 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:23:32.522 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:32.522 Cannot find device "nvmf_tgt_br2" 00:23:32.522 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:23:32.522 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:32.522 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:32.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:32.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:32.781 14:01:31 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:32.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:23:32.781 00:23:32.781 --- 10.0.0.2 ping statistics --- 00:23:32.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.781 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:32.781 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:32.781 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:23:32.781 00:23:32.781 --- 10.0.0.3 ping statistics --- 00:23:32.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.781 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:32.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:23:32.781 00:23:32.781 --- 10.0.0.1 ping statistics --- 00:23:32.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.781 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:23:32.781 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.782 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.782 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.782 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.782 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.782 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.782 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=75511 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 75511 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 75511 ']' 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.040 14:01:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 [2024-05-15 14:01:31.398954] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:23:33.040 [2024-05-15 14:01:31.399026] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.040 [2024-05-15 14:01:31.530115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:33.299 [2024-05-15 14:01:31.631207] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:33.299 [2024-05-15 14:01:31.631429] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.299 [2024-05-15 14:01:31.631523] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.299 [2024-05-15 14:01:31.631572] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.299 [2024-05-15 14:01:31.631599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.299 [2024-05-15 14:01:31.631821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.299 [2024-05-15 14:01:31.631823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75511 00:23:33.867 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:34.125 [2024-05-15 14:01:32.510255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.125 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:34.385 Malloc0 00:23:34.385 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:34.385 14:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.659 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.927 [2024-05-15 14:01:33.282601] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:34.927 [2024-05-15 14:01:33.282862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.927 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.927 [2024-05-15 14:01:33.474561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
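At this point the trace has finished bringing up the target side: a TCP transport, a Malloc0 bdev, an ANA-reporting subsystem, and two listeners on 10.0.0.2 (ports 4420 and 4421) that give the host two paths to the same namespace. A condensed sketch of that RPC sequence, assembled from the commands logged above (the $rpc_py shorthand and the comments are illustrative, not part of the test script itself):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with the options captured in the trace (-o plus an 8192-byte IO unit)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB backing bdev with 512-byte blocks
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  # ANA-reporting subsystem (-r), any host allowed (-a), capped at two namespaces (-m 2)
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same address and subsystem -> two I/O paths for the host
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The host side then attaches bdevperf to both listeners (the second attach just below uses -x multipath), which is what makes the per-path status queries in the rest of the trace possible.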
00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75561 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75561 /var/tmp/bdevperf.sock 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 75561 ']' 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:35.186 14:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:36.123 14:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:36.123 14:01:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:23:36.123 14:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:36.123 14:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:36.690 Nvme0n1 00:23:36.690 14:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:36.690 Nvme0n1 00:23:36.948 14:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:36.948 14:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.849 14:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:38.849 14:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:39.107 14:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:39.107 14:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 
00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.482 14:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.741 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.741 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.741 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.742 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.742 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.742 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.742 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.742 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.001 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.001 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:41.001 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.001 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.260 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.260 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:41.260 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.260 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.536 14:01:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.536 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:41.536 14:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:41.536 14:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.809 14:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:42.744 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:42.744 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:42.744 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.744 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.003 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.003 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:43.003 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:43.003 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.260 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.260 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:43.260 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.260 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.519 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.519 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.519 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.519 14:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.519 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.519 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:43.519 14:01:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.519 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.777 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.777 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:43.777 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.777 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:44.035 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.035 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:44.035 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:44.293 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:44.293 14:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:45.671 14:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:45.671 14:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:45.671 14:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.671 14:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:45.671 14:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.671 14:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:45.671 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.671 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.671 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.671 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.671 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.671 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.932 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.932 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.932 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.932 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:46.190 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.190 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:46.190 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.190 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:46.448 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.448 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:46.449 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.449 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:46.449 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.449 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:46.449 14:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:46.707 14:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:46.966 14:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:47.903 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:47.903 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:47.903 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.903 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:48.161 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.161 14:01:46 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:48.161 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.161 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.419 14:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.678 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.678 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:48.678 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.678 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.937 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.937 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:48.937 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:48.937 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.196 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.196 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:49.196 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:49.196 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:49.454 14:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:50.391 14:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:50.391 14:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:50.391 14:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.391 14:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.650 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.650 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:50.650 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.650 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.909 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.909 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.909 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.909 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.167 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.167 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.167 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.167 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.426 14:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.685 14:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.685 14:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:51.685 14:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:51.943 14:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:52.201 14:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:53.134 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:53.134 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:53.134 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.134 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.392 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.392 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:53.392 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.392 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.650 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.650 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.650 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.650 14:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.650 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.650 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.650 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.650 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:23:53.909 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.909 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:53.909 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.909 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.168 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.168 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:54.168 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.168 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.427 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.427 14:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:54.686 14:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:54.686 14:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:54.686 14:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.944 14:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:55.878 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:55.878 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.878 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.878 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:56.142 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.142 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:56.142 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.142 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
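The checks in this part of the trace all follow one pattern: set_ANA_state pushes a new ANA state to the 4420 and 4421 listeners via nvmf_subsystem_listener_set_ana_state, then check_status/port_status asks the bdevperf side (over /var/tmp/bdevperf.sock) how each path looks by filtering bdev_nvme_get_io_paths output with jq. A minimal reconstruction of that helper, assuming the same socket and jq filter as in the log (not the verbatim multipath_status.sh source):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  # port_status PORT ATTR EXPECTED
  # ATTR is one of current/connected/accessible, as queried throughout the trace
  port_status() {
      local port=$1 attr=$2 expected=$3
      local actual
      actual=$($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ $actual == "$expected" ]]
  }
  # e.g. after set_ANA_state inaccessible optimized, the trace expects I/O to
  # move to the 4421 path while 4420 stays connected but not current/accessible
  port_status 4420 current false
  port_status 4421 current true
  port_status 4420 accessible false

Each check_status call in the log is six of these assertions in a row, one per (port, attribute) pair, which is why the same bdev_nvme_get_io_paths/jq pair repeats after every ANA-state change.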
00:23:56.433 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.433 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:56.433 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.433 14:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.694 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.011 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.011 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.011 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.011 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.268 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.268 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:57.268 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:57.268 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.525 14:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:58.456 14:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:58.456 14:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:58.456 
14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.456 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.713 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.713 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:58.713 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.713 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.971 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.971 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.971 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.971 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.229 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.229 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.229 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.229 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.487 14:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.744 14:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.744 14:01:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:59.744 14:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:00.033 14:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:00.033 14:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.407 14:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.665 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.665 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.665 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.665 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.923 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:02.181 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.181 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:02.181 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.181 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.439 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.439 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:02.439 14:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.699 14:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:02.699 14:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.073 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.330 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.330 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.330 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.330 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.588 14:02:02 
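check_status, invoked after each set_ANA_state/sleep pair, is simply six port_status assertions in a fixed order: the current, connected and accessible flags of the 4420 path followed by the 4421 path, exactly the @68..@73 sequence traced here. A sketch matching that argument order (the actual helper may differ):

# Sketch: assert the full multipath view after an ANA transition.
# Argument order, as seen in the trace:
#   current_4420 current_4421 connected_4420 connected_4421 accessible_4420 accessible_4421
check_status() {
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

# After "set_ANA_state non_optimized inaccessible" the trace expects
#   check_status true false true true true false
# i.e. the 4421 path stays connected but is no longer accessible or current.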
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.588 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.588 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.588 14:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.588 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.588 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.588 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.588 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.847 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.847 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:04.847 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.847 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75561 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 75561 ']' 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 75561 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75561 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:05.106 killing process with pid 75561 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75561' 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 75561 00:24:05.106 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 75561 00:24:05.371 Connection closed with partial response: 00:24:05.371 00:24:05.371 00:24:05.371 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75561 00:24:05.371 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:05.371 [2024-05-15 14:01:33.538091] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:24:05.371 [2024-05-15 14:01:33.538184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75561 ] 00:24:05.371 [2024-05-15 14:01:33.680501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.371 [2024-05-15 14:01:33.780989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.371 Running I/O for 90 seconds... 00:24:05.371 [2024-05-15 14:01:47.729530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.371 [2024-05-15 14:01:47.729604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.371 [2024-05-15 14:01:47.729633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.371 [2024-05-15 14:01:47.729646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.371 [2024-05-15 14:01:47.729665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.371 [2024-05-15 14:01:47.729678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.371 [2024-05-15 14:01:47.729696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.371 [2024-05-15 14:01:47.729708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.371 [2024-05-15 14:01:47.729726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.371 [2024-05-15 14:01:47.729749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.371 [2024-05-15 14:01:47.729767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.371 [2024-05-15 14:01:47.729780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.371 [2024-05-15 14:01:47.729797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.729810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.729828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.729840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.729858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.729870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.729888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.729901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.729918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.729950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.729968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.729981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.729999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.372 [2024-05-15 14:01:47.730478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.372 [2024-05-15 14:01:47.730864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.730978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.730991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.731008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.731021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.731039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.372 [2024-05-15 14:01:47.731052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.372 [2024-05-15 14:01:47.731070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:24:05.373 [2024-05-15 14:01:47.731414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.373 [2024-05-15 14:01:47.731616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.731975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.731992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.373 [2024-05-15 14:01:47.732309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.373 [2024-05-15 14:01:47.732321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.374 [2024-05-15 14:01:47.732352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.732383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.732413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.732444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.732474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.732505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.732535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.732566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.732596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.732629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.732665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.732695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.732714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.732727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.733849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.733879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.733903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.733916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.733935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.733948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.733966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.733979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.733997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:24:05.374 [2024-05-15 14:01:47.734419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.374 [2024-05-15 14:01:47.734524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.374 [2024-05-15 14:01:47.734701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.374 [2024-05-15 14:01:47.734714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.734753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.734785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.734816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.734847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.734879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.734912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.734943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.734970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.734983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.375 [2024-05-15 14:01:47.735356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.375 [2024-05-15 14:01:47.735811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.735861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.735874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.753663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.753705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.753747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.753766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.753791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.753808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.375 [2024-05-15 14:01:47.753833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.375 [2024-05-15 14:01:47.753851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.753875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.753892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.753916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.753933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.753957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.753975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.753998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.754016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.754072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.754120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.754161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.754202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.754243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:24:05.377 [2024-05-15 14:01:47.754267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.754285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.377 [2024-05-15 14:01:47.754965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.754989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.377 [2024-05-15 14:01:47.755502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.377 [2024-05-15 14:01:47.755525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.377 [2024-05-15 14:01:47.755542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.755972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.755995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.756012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.756054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.756094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.756142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.756183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.756223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.756264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.756306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.756347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.756370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.756387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.757949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.757980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:24:05.378 [2024-05-15 14:01:47.758308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.378 [2024-05-15 14:01:47.758365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.378 [2024-05-15 14:01:47.758751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.378 [2024-05-15 14:01:47.758769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.758793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.758810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.758834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.758851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.758875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.758892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.758916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.758932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.758956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.758973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.758997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:05.379 [2024-05-15 14:01:47.759564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.379 [2024-05-15 14:01:47.759868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.759974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.759991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.379 [2024-05-15 14:01:47.760449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.379 [2024-05-15 14:01:47.760479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.760498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.760546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.760596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.760645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.760693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.760741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.760801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.760849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.760905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:24:05.380 [2024-05-15 14:01:47.760933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.760953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.760981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.761449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.761973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.761993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.762041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.762089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.762144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.762193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.762242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.380 [2024-05-15 14:01:47.762290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.762338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.380 [2024-05-15 14:01:47.762366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.380 [2024-05-15 14:01:47.762386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
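Every *NOTICE* WRITE/READ entry in this trace is SPDK's nvme_io_qpair_print_command() dump of a submitted I/O, and the spdk_nvme_print_completion() entry paired with it (the very next record here) shows how that I/O finished. The status printed as (03/02) decodes to Status Code Type 0x3, Path Related Status, Status Code 0x2, Asymmetric Access Inaccessible: every command on qid:1 in this window is being failed because the ANA group behind the active path is reported inaccessible, which is the condition the multipath status test is exercising. A rough measure of how much I/O landed in that window is simply a count over the saved console output; the log file name below is a placeholder for illustration, not something produced by this job:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log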
00:24:05.381 [2024-05-15 14:01:47.762434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.762968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.762988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:01:47.763667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.763695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.381 [2024-05-15 14:01:47.763715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.772466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.381 [2024-05-15 14:01:47.772539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.772585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.381 [2024-05-15 14:01:47.772614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.772653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.381 [2024-05-15 14:01:47.772682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
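The inaccessible period itself is driven from the target side rather than by the kernel initiator: the multipath status suite flips the ANA state of one listener and watches how the host's path handling reacts. The sketch below is only an assumption about that mechanism, not a command visible at this point in the log; the NQN matches the subsystem torn down further down, the address and port are the common.sh defaults, and the exact option spelling should be confirmed with scripts/rpc.py nvmf_subsystem_listener_set_ana_state -h for the SPDK revision under test:

  # hypothetical invocation: mark one listener's ANA group inaccessible, then restore it
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n optimized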
00:24:05.381 [2024-05-15 14:01:47.772721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.381 [2024-05-15 14:01:47.772772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:01:47.773571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.381 [2024-05-15 14:01:47.773636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.381 [2024-05-15 14:02:01.223117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.381 [2024-05-15 14:02:01.223211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.223286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.223441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.223473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.223671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.223702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.223872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.223902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.223979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.223991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113432 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224468] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.382 [2024-05-15 14:02:01.224546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.382 [2024-05-15 14:02:01.224594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.382 [2024-05-15 14:02:01.224609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.224642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.224673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.224707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.224748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.224779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.224810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 
14:02:01.224828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.224841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.224881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.224913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.224944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.224976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.224994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.225007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.225025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.225038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.225056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.225069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.225087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.383 [2024-05-15 14:02:01.225100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.226088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.226119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.226143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.226157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.226175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.226189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.383 [2024-05-15 14:02:01.226210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.383 [2024-05-15 14:02:01.226224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.383 Received shutdown signal, test time was about 28.245354 seconds 00:24:05.383 00:24:05.383 Latency(us) 00:24:05.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.383 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.383 Verification LBA range: start 0x0 length 0x4000 00:24:05.383 Nvme0n1 : 28.24 11347.52 44.33 0.00 0.00 11258.33 579.03 3058978.34 00:24:05.383 =================================================================================================================== 00:24:05.383 Total : 11347.52 44.33 0.00 0.00 11258.33 579.03 3058978.34 00:24:05.383 14:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.642 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.642 rmmod nvme_tcp 00:24:05.642 rmmod nvme_fabrics 00:24:05.642 rmmod nvme_keyring 00:24:05.901 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.901 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:05.901 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:05.901 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 75511 ']' 00:24:05.901 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 75511 00:24:05.901 14:02:04 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 75511 ']' 00:24:05.901 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 75511 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75511 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:05.902 killing process with pid 75511 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75511' 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 75511 00:24:05.902 [2024-05-15 14:02:04.260849] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:05.902 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 75511 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:06.161 00:24:06.161 real 0m33.959s 00:24:06.161 user 1m44.089s 00:24:06.161 sys 0m12.446s 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:06.161 14:02:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.161 ************************************ 00:24:06.161 END TEST nvmf_host_multipath_status 00:24:06.161 ************************************ 00:24:06.419 14:02:04 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:06.419 14:02:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:06.419 14:02:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:06.419 14:02:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:06.419 ************************************ 00:24:06.419 START TEST nvmf_discovery_remove_ifc 00:24:06.419 ************************************ 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 
-- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:06.419 * Looking for test storage... 00:24:06.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.419 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:06.420 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:06.679 Cannot find device "nvmf_tgt_br" 00:24:06.679 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:24:06.679 14:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:24:06.679 Cannot find device "nvmf_tgt_br2" 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:06.679 Cannot find device "nvmf_tgt_br" 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:06.679 Cannot find device "nvmf_tgt_br2" 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:06.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:06.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:06.679 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:06.946 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:06.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:24:06.947 00:24:06.947 --- 10.0.0.2 ping statistics --- 00:24:06.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.947 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:06.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:06.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:24:06.947 00:24:06.947 --- 10.0.0.3 ping statistics --- 00:24:06.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.947 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:06.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:24:06.947 00:24:06.947 --- 10.0.0.1 ping statistics --- 00:24:06.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.947 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76290 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76290 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 76290 ']' 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:06.947 14:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.947 [2024-05-15 14:02:05.433260] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:24:06.947 [2024-05-15 14:02:05.433345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.220 [2024-05-15 14:02:05.574120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.220 [2024-05-15 14:02:05.672858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:07.220 [2024-05-15 14:02:05.672908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.220 [2024-05-15 14:02:05.672917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.220 [2024-05-15 14:02:05.672926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.220 [2024-05-15 14:02:05.672933] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.220 [2024-05-15 14:02:05.672957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.156 [2024-05-15 14:02:06.451876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.156 [2024-05-15 14:02:06.459820] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:08.156 [2024-05-15 14:02:06.460021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:08.156 null0 00:24:08.156 [2024-05-15 14:02:06.495924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76325 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76325 /tmp/host.sock 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 76325 ']' 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:08.156 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:08.156 14:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.156 [2024-05-15 14:02:06.565895] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:24:08.156 [2024-05-15 14:02:06.565976] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76325 ] 00:24:08.156 [2024-05-15 14:02:06.697214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.414 [2024-05-15 14:02:06.849581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.979 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.238 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.238 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:09.238 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.238 14:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.175 [2024-05-15 14:02:08.593155] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:10.175 [2024-05-15 14:02:08.593208] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:10.175 [2024-05-15 14:02:08.593225] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:10.175 [2024-05-15 14:02:08.599186] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:10.175 [2024-05-15 14:02:08.655758] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:10.175 [2024-05-15 14:02:08.655854] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:10.175 [2024-05-15 
14:02:08.655882] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:10.175 [2024-05-15 14:02:08.655903] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:10.175 [2024-05-15 14:02:08.655936] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.175 [2024-05-15 14:02:08.661824] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24eaee0 was disconnected and freed. delete nvme_qpair. 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.175 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.433 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.433 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:10.433 14:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:11.367 14:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.303 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.562 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.562 14:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.497 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.497 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.497 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.498 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.498 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.498 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.498 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.498 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.498 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.498 14:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.434 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.434 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.434 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.434 14:02:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.434 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.434 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.434 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.434 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.694 14:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.694 14:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.629 14:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.629 [2024-05-15 14:02:14.074051] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:15.629 [2024-05-15 14:02:14.074113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.629 [2024-05-15 14:02:14.074126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.629 [2024-05-15 14:02:14.074139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.629 [2024-05-15 14:02:14.074148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.629 [2024-05-15 14:02:14.074157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.629 [2024-05-15 14:02:14.074167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.629 [2024-05-15 14:02:14.074176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.629 [2024-05-15 14:02:14.074184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.629 [2024-05-15 14:02:14.074194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.629 [2024-05-15 14:02:14.074203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.629 [2024-05-15 14:02:14.074212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451da0 is same with the state(5) to be set 00:24:15.629 [2024-05-15 14:02:14.084025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451da0 (9): Bad file descriptor 00:24:15.629 [2024-05-15 14:02:14.094031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.566 14:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.566 14:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.566 14:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.566 14:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.566 14:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.566 14:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.566 14:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.566 [2024-05-15 14:02:15.102821] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:24:17.944 [2024-05-15 14:02:16.126823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:18.880 [2024-05-15 14:02:17.150818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:18.880 [2024-05-15 14:02:17.150984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451da0 with addr=10.0.0.2, port=4420 00:24:18.880 [2024-05-15 14:02:17.151029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451da0 is same with the state(5) to be set 00:24:18.880 [2024-05-15 14:02:17.151965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451da0 (9): Bad file descriptor 00:24:18.880 [2024-05-15 14:02:17.152052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
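(Editor's note, not part of the captured log.) The once-per-second reconnect attempts and the final "Resetting controller failed" message above follow from the discovery options the host was started with earlier in this run: --reconnect-delay-sec 1 sets the retry cadence and --ctrlr-loss-timeout-sec 2 bounds how long the lost controller is kept before the discovery service gives up and the bdev is removed. A minimal sketch of issuing that same discovery request by hand with SPDK's scripts/rpc.py against the host socket used here (/tmp/host.sock); the address, port, and NQN are taken from the log above, while the repo path is assumed:

  # Attach a discovery controller to the target's discovery service and wait
  # until the referenced subsystem's bdev has been created on the host.
  # Short loss/reconnect timeouts make the bdev disappear quickly once the
  # target-side interface goes down, which is what this test exercises.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach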
00:24:18.880 [2024-05-15 14:02:17.152107] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:18.880 [2024-05-15 14:02:17.152182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.880 [2024-05-15 14:02:17.152215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.880 [2024-05-15 14:02:17.152249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.880 [2024-05-15 14:02:17.152276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.880 [2024-05-15 14:02:17.152303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.880 [2024-05-15 14:02:17.152329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.880 [2024-05-15 14:02:17.152356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.880 [2024-05-15 14:02:17.152384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.880 [2024-05-15 14:02:17.152412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.881 [2024-05-15 14:02:17.152437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.881 [2024-05-15 14:02:17.152463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:18.881 [2024-05-15 14:02:17.152524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451610 (9): Bad file descriptor 00:24:18.881 [2024-05-15 14:02:17.153531] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:18.881 [2024-05-15 14:02:17.153582] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:18.881 14:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.881 14:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.881 14:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:19.819 14:02:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.755 [2024-05-15 14:02:19.160592] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:20.755 [2024-05-15 14:02:19.160632] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:20.755 [2024-05-15 14:02:19.160648] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:20.755 [2024-05-15 14:02:19.166617] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:20.755 [2024-05-15 14:02:19.221977] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:20.755 [2024-05-15 14:02:19.222253] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:20.755 [2024-05-15 14:02:19.222311] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:20.755 [2024-05-15 14:02:19.222403] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:20.755 [2024-05-15 14:02:19.222503] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:20.756 [2024-05-15 14:02:19.229311] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24f85c0 was disconnected and freed. delete nvme_qpair. 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76325 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 76325 ']' 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 76325 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76325 00:24:21.014 killing process with pid 76325 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76325' 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@965 -- # kill 76325 00:24:21.014 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 76325 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.273 rmmod nvme_tcp 00:24:21.273 rmmod nvme_fabrics 00:24:21.273 rmmod nvme_keyring 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76290 ']' 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76290 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 76290 ']' 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 76290 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76290 00:24:21.273 killing process with pid 76290 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76290' 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 76290 00:24:21.273 [2024-05-15 14:02:19.774601] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:21.273 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 76290 00:24:21.532 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.532 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.532 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.532 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.532 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.532 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.532 14:02:19 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.532 14:02:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.532 14:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:21.532 ************************************ 00:24:21.532 END TEST nvmf_discovery_remove_ifc 00:24:21.532 ************************************ 00:24:21.532 00:24:21.532 real 0m15.279s 00:24:21.532 user 0m23.522s 00:24:21.532 sys 0m3.403s 00:24:21.532 14:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:21.532 14:02:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.791 14:02:20 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:21.791 14:02:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:21.791 14:02:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:21.791 14:02:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:21.791 ************************************ 00:24:21.791 START TEST nvmf_identify_kernel_target 00:24:21.791 ************************************ 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:21.791 * Looking for test storage... 00:24:21.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.791 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:21.792 
14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:21.792 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:22.051 Cannot find device "nvmf_tgt_br" 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:22.051 Cannot find device "nvmf_tgt_br2" 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:22.051 Cannot find device "nvmf_tgt_br" 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:22.051 Cannot find device "nvmf_tgt_br2" 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:22.051 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:22.051 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns 
exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:22.051 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:22.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:24:22.310 00:24:22.310 --- 10.0.0.2 ping statistics --- 00:24:22.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.310 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:22.310 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:22.310 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:24:22.310 00:24:22.310 --- 10.0.0.3 ping statistics --- 00:24:22.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.310 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:22.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:24:22.310 00:24:22.310 --- 10.0.0.1 ping statistics --- 00:24:22.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.310 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:22.310 14:02:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:22.310 14:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:22.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:22.877 Waiting for block devices as requested 00:24:22.877 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:23.135 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:23.135 No valid GPT data, bailing 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:23.135 No valid GPT data, bailing 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:23.135 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:23.395 No valid GPT data, bailing 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:23.395 No valid GPT data, bailing 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:23.395 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -a 10.0.0.1 -t tcp -s 4420 00:24:23.395 00:24:23.395 Discovery Log Number of Records 2, Generation counter 2 00:24:23.395 =====Discovery Log Entry 0====== 00:24:23.395 trtype: tcp 00:24:23.395 adrfam: ipv4 00:24:23.395 subtype: current discovery subsystem 00:24:23.395 treq: not specified, sq flow control disable supported 00:24:23.395 portid: 1 00:24:23.395 trsvcid: 4420 00:24:23.395 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:23.395 traddr: 10.0.0.1 00:24:23.395 eflags: none 00:24:23.395 sectype: none 00:24:23.395 =====Discovery Log Entry 1====== 00:24:23.395 trtype: tcp 00:24:23.395 adrfam: ipv4 00:24:23.395 subtype: nvme subsystem 00:24:23.395 treq: not specified, sq flow control disable supported 00:24:23.395 portid: 1 00:24:23.395 trsvcid: 4420 00:24:23.395 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:23.395 traddr: 10.0.0.1 00:24:23.396 eflags: none 00:24:23.396 sectype: none 00:24:23.396 14:02:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:23.396 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:23.656 ===================================================== 00:24:23.656 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:23.656 ===================================================== 00:24:23.656 Controller Capabilities/Features 00:24:23.656 ================================ 00:24:23.656 Vendor ID: 0000 00:24:23.656 Subsystem Vendor ID: 0000 00:24:23.656 Serial Number: 4da33b8dfeee1f34f6a9 00:24:23.656 Model Number: Linux 00:24:23.656 Firmware Version: 6.7.0-68 00:24:23.656 Recommended Arb Burst: 0 
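The nvmf/common.sh@658-677 trace above exports the selected local block device (/dev/nvme1n1) through the kernel nvmet stack purely via configfs, and the 'nvme discover' listing that follows confirms the new subsystem is visible next to the discovery subsystem. A condensed sketch of that sequence is below; xtrace hides redirection targets, so the attribute file names used here (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs entries and are an assumption, not taken verbatim from the trace.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1
  modprobe nvmet                                             # loaded up front in the trace
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"     # the unused, non-zoned device picked above
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"                        # listen on the namespaced target IP
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                        # expose the subsystem on the port
  nvme discover -t tcp -a 10.0.0.1 -s 4420                   # should list testnqn as a second entry

The matching clean_kernel_target teardown later in the log reverses this: disable the namespace, remove the port-to-subsystem link, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.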
00:24:23.656 IEEE OUI Identifier: 00 00 00 00:24:23.656 Multi-path I/O 00:24:23.656 May have multiple subsystem ports: No 00:24:23.656 May have multiple controllers: No 00:24:23.656 Associated with SR-IOV VF: No 00:24:23.656 Max Data Transfer Size: Unlimited 00:24:23.656 Max Number of Namespaces: 0 00:24:23.656 Max Number of I/O Queues: 1024 00:24:23.656 NVMe Specification Version (VS): 1.3 00:24:23.656 NVMe Specification Version (Identify): 1.3 00:24:23.656 Maximum Queue Entries: 1024 00:24:23.656 Contiguous Queues Required: No 00:24:23.656 Arbitration Mechanisms Supported 00:24:23.656 Weighted Round Robin: Not Supported 00:24:23.656 Vendor Specific: Not Supported 00:24:23.656 Reset Timeout: 7500 ms 00:24:23.656 Doorbell Stride: 4 bytes 00:24:23.656 NVM Subsystem Reset: Not Supported 00:24:23.656 Command Sets Supported 00:24:23.656 NVM Command Set: Supported 00:24:23.656 Boot Partition: Not Supported 00:24:23.656 Memory Page Size Minimum: 4096 bytes 00:24:23.656 Memory Page Size Maximum: 4096 bytes 00:24:23.656 Persistent Memory Region: Not Supported 00:24:23.656 Optional Asynchronous Events Supported 00:24:23.656 Namespace Attribute Notices: Not Supported 00:24:23.656 Firmware Activation Notices: Not Supported 00:24:23.656 ANA Change Notices: Not Supported 00:24:23.656 PLE Aggregate Log Change Notices: Not Supported 00:24:23.656 LBA Status Info Alert Notices: Not Supported 00:24:23.656 EGE Aggregate Log Change Notices: Not Supported 00:24:23.656 Normal NVM Subsystem Shutdown event: Not Supported 00:24:23.656 Zone Descriptor Change Notices: Not Supported 00:24:23.656 Discovery Log Change Notices: Supported 00:24:23.656 Controller Attributes 00:24:23.656 128-bit Host Identifier: Not Supported 00:24:23.656 Non-Operational Permissive Mode: Not Supported 00:24:23.656 NVM Sets: Not Supported 00:24:23.656 Read Recovery Levels: Not Supported 00:24:23.656 Endurance Groups: Not Supported 00:24:23.656 Predictable Latency Mode: Not Supported 00:24:23.656 Traffic Based Keep ALive: Not Supported 00:24:23.656 Namespace Granularity: Not Supported 00:24:23.656 SQ Associations: Not Supported 00:24:23.656 UUID List: Not Supported 00:24:23.656 Multi-Domain Subsystem: Not Supported 00:24:23.656 Fixed Capacity Management: Not Supported 00:24:23.656 Variable Capacity Management: Not Supported 00:24:23.656 Delete Endurance Group: Not Supported 00:24:23.656 Delete NVM Set: Not Supported 00:24:23.656 Extended LBA Formats Supported: Not Supported 00:24:23.656 Flexible Data Placement Supported: Not Supported 00:24:23.656 00:24:23.656 Controller Memory Buffer Support 00:24:23.656 ================================ 00:24:23.656 Supported: No 00:24:23.656 00:24:23.656 Persistent Memory Region Support 00:24:23.656 ================================ 00:24:23.656 Supported: No 00:24:23.656 00:24:23.656 Admin Command Set Attributes 00:24:23.656 ============================ 00:24:23.656 Security Send/Receive: Not Supported 00:24:23.656 Format NVM: Not Supported 00:24:23.656 Firmware Activate/Download: Not Supported 00:24:23.656 Namespace Management: Not Supported 00:24:23.656 Device Self-Test: Not Supported 00:24:23.656 Directives: Not Supported 00:24:23.656 NVMe-MI: Not Supported 00:24:23.656 Virtualization Management: Not Supported 00:24:23.656 Doorbell Buffer Config: Not Supported 00:24:23.656 Get LBA Status Capability: Not Supported 00:24:23.656 Command & Feature Lockdown Capability: Not Supported 00:24:23.656 Abort Command Limit: 1 00:24:23.656 Async Event Request Limit: 1 00:24:23.656 Number of Firmware Slots: N/A 
00:24:23.656 Firmware Slot 1 Read-Only: N/A 00:24:23.656 Firmware Activation Without Reset: N/A 00:24:23.656 Multiple Update Detection Support: N/A 00:24:23.656 Firmware Update Granularity: No Information Provided 00:24:23.656 Per-Namespace SMART Log: No 00:24:23.656 Asymmetric Namespace Access Log Page: Not Supported 00:24:23.656 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:23.656 Command Effects Log Page: Not Supported 00:24:23.656 Get Log Page Extended Data: Supported 00:24:23.656 Telemetry Log Pages: Not Supported 00:24:23.656 Persistent Event Log Pages: Not Supported 00:24:23.656 Supported Log Pages Log Page: May Support 00:24:23.656 Commands Supported & Effects Log Page: Not Supported 00:24:23.656 Feature Identifiers & Effects Log Page:May Support 00:24:23.656 NVMe-MI Commands & Effects Log Page: May Support 00:24:23.656 Data Area 4 for Telemetry Log: Not Supported 00:24:23.656 Error Log Page Entries Supported: 1 00:24:23.656 Keep Alive: Not Supported 00:24:23.656 00:24:23.656 NVM Command Set Attributes 00:24:23.656 ========================== 00:24:23.656 Submission Queue Entry Size 00:24:23.656 Max: 1 00:24:23.656 Min: 1 00:24:23.656 Completion Queue Entry Size 00:24:23.656 Max: 1 00:24:23.656 Min: 1 00:24:23.656 Number of Namespaces: 0 00:24:23.656 Compare Command: Not Supported 00:24:23.656 Write Uncorrectable Command: Not Supported 00:24:23.656 Dataset Management Command: Not Supported 00:24:23.656 Write Zeroes Command: Not Supported 00:24:23.656 Set Features Save Field: Not Supported 00:24:23.656 Reservations: Not Supported 00:24:23.656 Timestamp: Not Supported 00:24:23.656 Copy: Not Supported 00:24:23.656 Volatile Write Cache: Not Present 00:24:23.656 Atomic Write Unit (Normal): 1 00:24:23.656 Atomic Write Unit (PFail): 1 00:24:23.656 Atomic Compare & Write Unit: 1 00:24:23.656 Fused Compare & Write: Not Supported 00:24:23.656 Scatter-Gather List 00:24:23.656 SGL Command Set: Supported 00:24:23.656 SGL Keyed: Not Supported 00:24:23.656 SGL Bit Bucket Descriptor: Not Supported 00:24:23.656 SGL Metadata Pointer: Not Supported 00:24:23.656 Oversized SGL: Not Supported 00:24:23.656 SGL Metadata Address: Not Supported 00:24:23.656 SGL Offset: Supported 00:24:23.656 Transport SGL Data Block: Not Supported 00:24:23.656 Replay Protected Memory Block: Not Supported 00:24:23.656 00:24:23.656 Firmware Slot Information 00:24:23.656 ========================= 00:24:23.656 Active slot: 0 00:24:23.656 00:24:23.656 00:24:23.656 Error Log 00:24:23.656 ========= 00:24:23.656 00:24:23.656 Active Namespaces 00:24:23.656 ================= 00:24:23.656 Discovery Log Page 00:24:23.656 ================== 00:24:23.656 Generation Counter: 2 00:24:23.656 Number of Records: 2 00:24:23.656 Record Format: 0 00:24:23.656 00:24:23.656 Discovery Log Entry 0 00:24:23.656 ---------------------- 00:24:23.656 Transport Type: 3 (TCP) 00:24:23.657 Address Family: 1 (IPv4) 00:24:23.657 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:23.657 Entry Flags: 00:24:23.657 Duplicate Returned Information: 0 00:24:23.657 Explicit Persistent Connection Support for Discovery: 0 00:24:23.657 Transport Requirements: 00:24:23.657 Secure Channel: Not Specified 00:24:23.657 Port ID: 1 (0x0001) 00:24:23.657 Controller ID: 65535 (0xffff) 00:24:23.657 Admin Max SQ Size: 32 00:24:23.657 Transport Service Identifier: 4420 00:24:23.657 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:23.657 Transport Address: 10.0.0.1 00:24:23.657 Discovery Log Entry 1 00:24:23.657 ---------------------- 
00:24:23.657 Transport Type: 3 (TCP) 00:24:23.657 Address Family: 1 (IPv4) 00:24:23.657 Subsystem Type: 2 (NVM Subsystem) 00:24:23.657 Entry Flags: 00:24:23.657 Duplicate Returned Information: 0 00:24:23.657 Explicit Persistent Connection Support for Discovery: 0 00:24:23.657 Transport Requirements: 00:24:23.657 Secure Channel: Not Specified 00:24:23.657 Port ID: 1 (0x0001) 00:24:23.657 Controller ID: 65535 (0xffff) 00:24:23.657 Admin Max SQ Size: 32 00:24:23.657 Transport Service Identifier: 4420 00:24:23.657 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:23.657 Transport Address: 10.0.0.1 00:24:23.657 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:23.917 get_feature(0x01) failed 00:24:23.917 get_feature(0x02) failed 00:24:23.917 get_feature(0x04) failed 00:24:23.917 ===================================================== 00:24:23.917 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:23.917 ===================================================== 00:24:23.917 Controller Capabilities/Features 00:24:23.917 ================================ 00:24:23.917 Vendor ID: 0000 00:24:23.917 Subsystem Vendor ID: 0000 00:24:23.917 Serial Number: e012658d4bac243a87bd 00:24:23.917 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:23.917 Firmware Version: 6.7.0-68 00:24:23.917 Recommended Arb Burst: 6 00:24:23.917 IEEE OUI Identifier: 00 00 00 00:24:23.917 Multi-path I/O 00:24:23.917 May have multiple subsystem ports: Yes 00:24:23.917 May have multiple controllers: Yes 00:24:23.917 Associated with SR-IOV VF: No 00:24:23.917 Max Data Transfer Size: Unlimited 00:24:23.917 Max Number of Namespaces: 1024 00:24:23.917 Max Number of I/O Queues: 128 00:24:23.917 NVMe Specification Version (VS): 1.3 00:24:23.917 NVMe Specification Version (Identify): 1.3 00:24:23.917 Maximum Queue Entries: 1024 00:24:23.917 Contiguous Queues Required: No 00:24:23.917 Arbitration Mechanisms Supported 00:24:23.917 Weighted Round Robin: Not Supported 00:24:23.917 Vendor Specific: Not Supported 00:24:23.917 Reset Timeout: 7500 ms 00:24:23.917 Doorbell Stride: 4 bytes 00:24:23.917 NVM Subsystem Reset: Not Supported 00:24:23.917 Command Sets Supported 00:24:23.917 NVM Command Set: Supported 00:24:23.917 Boot Partition: Not Supported 00:24:23.917 Memory Page Size Minimum: 4096 bytes 00:24:23.917 Memory Page Size Maximum: 4096 bytes 00:24:23.917 Persistent Memory Region: Not Supported 00:24:23.917 Optional Asynchronous Events Supported 00:24:23.917 Namespace Attribute Notices: Supported 00:24:23.917 Firmware Activation Notices: Not Supported 00:24:23.917 ANA Change Notices: Supported 00:24:23.917 PLE Aggregate Log Change Notices: Not Supported 00:24:23.917 LBA Status Info Alert Notices: Not Supported 00:24:23.917 EGE Aggregate Log Change Notices: Not Supported 00:24:23.917 Normal NVM Subsystem Shutdown event: Not Supported 00:24:23.917 Zone Descriptor Change Notices: Not Supported 00:24:23.917 Discovery Log Change Notices: Not Supported 00:24:23.917 Controller Attributes 00:24:23.917 128-bit Host Identifier: Supported 00:24:23.917 Non-Operational Permissive Mode: Not Supported 00:24:23.917 NVM Sets: Not Supported 00:24:23.917 Read Recovery Levels: Not Supported 00:24:23.917 Endurance Groups: Not Supported 00:24:23.917 Predictable Latency Mode: Not Supported 00:24:23.917 Traffic Based Keep ALive: 
Supported 00:24:23.917 Namespace Granularity: Not Supported 00:24:23.917 SQ Associations: Not Supported 00:24:23.917 UUID List: Not Supported 00:24:23.917 Multi-Domain Subsystem: Not Supported 00:24:23.917 Fixed Capacity Management: Not Supported 00:24:23.917 Variable Capacity Management: Not Supported 00:24:23.917 Delete Endurance Group: Not Supported 00:24:23.917 Delete NVM Set: Not Supported 00:24:23.917 Extended LBA Formats Supported: Not Supported 00:24:23.917 Flexible Data Placement Supported: Not Supported 00:24:23.917 00:24:23.917 Controller Memory Buffer Support 00:24:23.917 ================================ 00:24:23.917 Supported: No 00:24:23.917 00:24:23.917 Persistent Memory Region Support 00:24:23.917 ================================ 00:24:23.917 Supported: No 00:24:23.917 00:24:23.917 Admin Command Set Attributes 00:24:23.917 ============================ 00:24:23.917 Security Send/Receive: Not Supported 00:24:23.917 Format NVM: Not Supported 00:24:23.917 Firmware Activate/Download: Not Supported 00:24:23.917 Namespace Management: Not Supported 00:24:23.917 Device Self-Test: Not Supported 00:24:23.917 Directives: Not Supported 00:24:23.917 NVMe-MI: Not Supported 00:24:23.917 Virtualization Management: Not Supported 00:24:23.917 Doorbell Buffer Config: Not Supported 00:24:23.917 Get LBA Status Capability: Not Supported 00:24:23.917 Command & Feature Lockdown Capability: Not Supported 00:24:23.917 Abort Command Limit: 4 00:24:23.917 Async Event Request Limit: 4 00:24:23.917 Number of Firmware Slots: N/A 00:24:23.917 Firmware Slot 1 Read-Only: N/A 00:24:23.917 Firmware Activation Without Reset: N/A 00:24:23.918 Multiple Update Detection Support: N/A 00:24:23.918 Firmware Update Granularity: No Information Provided 00:24:23.918 Per-Namespace SMART Log: Yes 00:24:23.918 Asymmetric Namespace Access Log Page: Supported 00:24:23.918 ANA Transition Time : 10 sec 00:24:23.918 00:24:23.918 Asymmetric Namespace Access Capabilities 00:24:23.918 ANA Optimized State : Supported 00:24:23.918 ANA Non-Optimized State : Supported 00:24:23.918 ANA Inaccessible State : Supported 00:24:23.918 ANA Persistent Loss State : Supported 00:24:23.918 ANA Change State : Supported 00:24:23.918 ANAGRPID is not changed : No 00:24:23.918 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:23.918 00:24:23.918 ANA Group Identifier Maximum : 128 00:24:23.918 Number of ANA Group Identifiers : 128 00:24:23.918 Max Number of Allowed Namespaces : 1024 00:24:23.918 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:23.918 Command Effects Log Page: Supported 00:24:23.918 Get Log Page Extended Data: Supported 00:24:23.918 Telemetry Log Pages: Not Supported 00:24:23.918 Persistent Event Log Pages: Not Supported 00:24:23.918 Supported Log Pages Log Page: May Support 00:24:23.918 Commands Supported & Effects Log Page: Not Supported 00:24:23.918 Feature Identifiers & Effects Log Page:May Support 00:24:23.918 NVMe-MI Commands & Effects Log Page: May Support 00:24:23.918 Data Area 4 for Telemetry Log: Not Supported 00:24:23.918 Error Log Page Entries Supported: 128 00:24:23.918 Keep Alive: Supported 00:24:23.918 Keep Alive Granularity: 1000 ms 00:24:23.918 00:24:23.918 NVM Command Set Attributes 00:24:23.918 ========================== 00:24:23.918 Submission Queue Entry Size 00:24:23.918 Max: 64 00:24:23.918 Min: 64 00:24:23.918 Completion Queue Entry Size 00:24:23.918 Max: 16 00:24:23.918 Min: 16 00:24:23.918 Number of Namespaces: 1024 00:24:23.918 Compare Command: Not Supported 00:24:23.918 Write Uncorrectable Command: Not 
Supported 00:24:23.918 Dataset Management Command: Supported 00:24:23.918 Write Zeroes Command: Supported 00:24:23.918 Set Features Save Field: Not Supported 00:24:23.918 Reservations: Not Supported 00:24:23.918 Timestamp: Not Supported 00:24:23.918 Copy: Not Supported 00:24:23.918 Volatile Write Cache: Present 00:24:23.918 Atomic Write Unit (Normal): 1 00:24:23.918 Atomic Write Unit (PFail): 1 00:24:23.918 Atomic Compare & Write Unit: 1 00:24:23.918 Fused Compare & Write: Not Supported 00:24:23.918 Scatter-Gather List 00:24:23.918 SGL Command Set: Supported 00:24:23.918 SGL Keyed: Not Supported 00:24:23.918 SGL Bit Bucket Descriptor: Not Supported 00:24:23.918 SGL Metadata Pointer: Not Supported 00:24:23.918 Oversized SGL: Not Supported 00:24:23.918 SGL Metadata Address: Not Supported 00:24:23.918 SGL Offset: Supported 00:24:23.918 Transport SGL Data Block: Not Supported 00:24:23.918 Replay Protected Memory Block: Not Supported 00:24:23.918 00:24:23.918 Firmware Slot Information 00:24:23.918 ========================= 00:24:23.918 Active slot: 0 00:24:23.918 00:24:23.918 Asymmetric Namespace Access 00:24:23.918 =========================== 00:24:23.918 Change Count : 0 00:24:23.918 Number of ANA Group Descriptors : 1 00:24:23.918 ANA Group Descriptor : 0 00:24:23.918 ANA Group ID : 1 00:24:23.918 Number of NSID Values : 1 00:24:23.918 Change Count : 0 00:24:23.918 ANA State : 1 00:24:23.918 Namespace Identifier : 1 00:24:23.918 00:24:23.918 Commands Supported and Effects 00:24:23.918 ============================== 00:24:23.918 Admin Commands 00:24:23.918 -------------- 00:24:23.918 Get Log Page (02h): Supported 00:24:23.918 Identify (06h): Supported 00:24:23.918 Abort (08h): Supported 00:24:23.918 Set Features (09h): Supported 00:24:23.918 Get Features (0Ah): Supported 00:24:23.918 Asynchronous Event Request (0Ch): Supported 00:24:23.918 Keep Alive (18h): Supported 00:24:23.918 I/O Commands 00:24:23.918 ------------ 00:24:23.918 Flush (00h): Supported 00:24:23.918 Write (01h): Supported LBA-Change 00:24:23.918 Read (02h): Supported 00:24:23.918 Write Zeroes (08h): Supported LBA-Change 00:24:23.918 Dataset Management (09h): Supported 00:24:23.918 00:24:23.918 Error Log 00:24:23.918 ========= 00:24:23.918 Entry: 0 00:24:23.918 Error Count: 0x3 00:24:23.918 Submission Queue Id: 0x0 00:24:23.918 Command Id: 0x5 00:24:23.918 Phase Bit: 0 00:24:23.918 Status Code: 0x2 00:24:23.918 Status Code Type: 0x0 00:24:23.918 Do Not Retry: 1 00:24:23.918 Error Location: 0x28 00:24:23.918 LBA: 0x0 00:24:23.918 Namespace: 0x0 00:24:23.918 Vendor Log Page: 0x0 00:24:23.918 ----------- 00:24:23.918 Entry: 1 00:24:23.918 Error Count: 0x2 00:24:23.918 Submission Queue Id: 0x0 00:24:23.918 Command Id: 0x5 00:24:23.918 Phase Bit: 0 00:24:23.918 Status Code: 0x2 00:24:23.918 Status Code Type: 0x0 00:24:23.918 Do Not Retry: 1 00:24:23.918 Error Location: 0x28 00:24:23.918 LBA: 0x0 00:24:23.918 Namespace: 0x0 00:24:23.918 Vendor Log Page: 0x0 00:24:23.918 ----------- 00:24:23.918 Entry: 2 00:24:23.918 Error Count: 0x1 00:24:23.918 Submission Queue Id: 0x0 00:24:23.918 Command Id: 0x4 00:24:23.918 Phase Bit: 0 00:24:23.918 Status Code: 0x2 00:24:23.918 Status Code Type: 0x0 00:24:23.918 Do Not Retry: 1 00:24:23.918 Error Location: 0x28 00:24:23.918 LBA: 0x0 00:24:23.918 Namespace: 0x0 00:24:23.918 Vendor Log Page: 0x0 00:24:23.918 00:24:23.918 Number of Queues 00:24:23.918 ================ 00:24:23.918 Number of I/O Submission Queues: 128 00:24:23.918 Number of I/O Completion Queues: 128 00:24:23.918 00:24:23.918 ZNS 
Specific Controller Data 00:24:23.918 ============================ 00:24:23.918 Zone Append Size Limit: 0 00:24:23.918 00:24:23.918 00:24:23.918 Active Namespaces 00:24:23.918 ================= 00:24:23.918 get_feature(0x05) failed 00:24:23.918 Namespace ID:1 00:24:23.918 Command Set Identifier: NVM (00h) 00:24:23.918 Deallocate: Supported 00:24:23.918 Deallocated/Unwritten Error: Not Supported 00:24:23.918 Deallocated Read Value: Unknown 00:24:23.918 Deallocate in Write Zeroes: Not Supported 00:24:23.918 Deallocated Guard Field: 0xFFFF 00:24:23.918 Flush: Supported 00:24:23.918 Reservation: Not Supported 00:24:23.918 Namespace Sharing Capabilities: Multiple Controllers 00:24:23.918 Size (in LBAs): 1310720 (5GiB) 00:24:23.918 Capacity (in LBAs): 1310720 (5GiB) 00:24:23.918 Utilization (in LBAs): 1310720 (5GiB) 00:24:23.918 UUID: 09ca1c31-057d-4604-a5de-246b5f4593ce 00:24:23.918 Thin Provisioning: Not Supported 00:24:23.918 Per-NS Atomic Units: Yes 00:24:23.918 Atomic Boundary Size (Normal): 0 00:24:23.918 Atomic Boundary Size (PFail): 0 00:24:23.918 Atomic Boundary Offset: 0 00:24:23.918 NGUID/EUI64 Never Reused: No 00:24:23.918 ANA group ID: 1 00:24:23.918 Namespace Write Protected: No 00:24:23.918 Number of LBA Formats: 1 00:24:23.918 Current LBA Format: LBA Format #00 00:24:23.918 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:24:23.918 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.918 rmmod nvme_tcp 00:24:23.918 rmmod nvme_fabrics 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:23.918 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:23.919 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:24.186 14:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:24.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:25.086 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:25.086 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:25.086 ************************************ 00:24:25.086 END TEST nvmf_identify_kernel_target 00:24:25.086 ************************************ 00:24:25.086 00:24:25.086 real 0m3.385s 00:24:25.086 user 0m1.140s 00:24:25.086 sys 0m1.802s 00:24:25.086 14:02:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:25.086 14:02:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.086 14:02:23 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:25.086 14:02:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:25.086 14:02:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:25.086 14:02:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.086 ************************************ 00:24:25.086 START TEST nvmf_auth_host 00:24:25.086 ************************************ 00:24:25.086 14:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:25.344 * Looking for test storage... 
00:24:25.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:25.344 Cannot find device "nvmf_tgt_br" 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:25.344 Cannot find device "nvmf_tgt_br2" 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:25.344 Cannot find device "nvmf_tgt_br" 
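Before the nvmf_auth_host run, nvmf_veth_init first strips any leftover interfaces (the "Cannot find device" messages above are the expected result of deleting devices that no longer exist) and then rebuilds the same test topology traced for the previous test: one initiator-side veth on the host, two target-side veths inside the nvmf_tgt_ns_spdk namespace, their peer ends enslaved to the nvmf_br bridge, and iptables opened for NVMe/TCP port 4420. A condensed sketch of that topology, assuming iproute2 and iptables are available:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # host reaches both target IPs
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # namespace reaches the initiator

The ping checks correspond to the three statistics blocks printed in the log once the bridge is up.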
00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:25.344 Cannot find device "nvmf_tgt_br2" 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:25.344 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:25.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:25.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:25.601 14:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:25.601 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:25.602 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:25.602 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:25.602 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:25.602 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:25.602 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:24:25.602 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:25.858 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:25.858 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:25.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:24:25.858 00:24:25.858 --- 10.0.0.2 ping statistics --- 00:24:25.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.858 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:25.858 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:25.858 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:25.858 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:24:25.858 00:24:25.858 --- 10.0.0.3 ping statistics --- 00:24:25.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.858 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:25.858 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:25.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:25.858 00:24:25.859 --- 10.0.0.1 ping statistics --- 00:24:25.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.859 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77235 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77235 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 77235 ']' 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:25.859 14:02:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:25.859 14:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=477a5b528130c1a5ccbce9520901d34f 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tme 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 477a5b528130c1a5ccbce9520901d34f 0 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 477a5b528130c1a5ccbce9520901d34f 0 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=477a5b528130c1a5ccbce9520901d34f 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tme 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tme 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tme 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1e4c0851660b06ebe0f72cf4a575d8a53c25a663ff2fab71012bc0ee9bd77b6e 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KnK 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1e4c0851660b06ebe0f72cf4a575d8a53c25a663ff2fab71012bc0ee9bd77b6e 3 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1e4c0851660b06ebe0f72cf4a575d8a53c25a663ff2fab71012bc0ee9bd77b6e 3 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1e4c0851660b06ebe0f72cf4a575d8a53c25a663ff2fab71012bc0ee9bd77b6e 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KnK 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KnK 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.KnK 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d971bba7e78b86f218a75bb6c274da424f8a81317eb5140 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9Az 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d971bba7e78b86f218a75bb6c274da424f8a81317eb5140 0 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d971bba7e78b86f218a75bb6c274da424f8a81317eb5140 0 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d971bba7e78b86f218a75bb6c274da424f8a81317eb5140 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:26.817 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9Az 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9Az 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9Az 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=38adaf8c4d71034dbecd038f7491b10aebe16dc253e9c54c 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GGC 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 38adaf8c4d71034dbecd038f7491b10aebe16dc253e9c54c 2 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 38adaf8c4d71034dbecd038f7491b10aebe16dc253e9c54c 2 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=38adaf8c4d71034dbecd038f7491b10aebe16dc253e9c54c 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GGC 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GGC 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.GGC 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b8c61a464e69254aa1851d473a3c017a 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3op 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b8c61a464e69254aa1851d473a3c017a 
1 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b8c61a464e69254aa1851d473a3c017a 1 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b8c61a464e69254aa1851d473a3c017a 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:27.075 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3op 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3op 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.3op 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=244c53d5fa038a3e04cf645cf6509666 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZJR 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 244c53d5fa038a3e04cf645cf6509666 1 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 244c53d5fa038a3e04cf645cf6509666 1 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=244c53d5fa038a3e04cf645cf6509666 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZJR 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZJR 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZJR 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:27.076 14:02:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=00bc804749c85e1139619f607676ab20182c57720ded11ef 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0lK 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 00bc804749c85e1139619f607676ab20182c57720ded11ef 2 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 00bc804749c85e1139619f607676ab20182c57720ded11ef 2 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=00bc804749c85e1139619f607676ab20182c57720ded11ef 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:27.076 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0lK 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0lK 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.0lK 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9fd21be2b698ed09be8d6c8bf370ecc5 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nZJ 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9fd21be2b698ed09be8d6c8bf370ecc5 0 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9fd21be2b698ed09be8d6c8bf370ecc5 0 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9fd21be2b698ed09be8d6c8bf370ecc5 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nZJ 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nZJ 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nZJ 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=578ba69a3023c9f245a4d6b88bea48b3f502973dc0a5ab47bc600b703e49c017 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.egp 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 578ba69a3023c9f245a4d6b88bea48b3f502973dc0a5ab47bc600b703e49c017 3 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 578ba69a3023c9f245a4d6b88bea48b3f502973dc0a5ab47bc600b703e49c017 3 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=578ba69a3023c9f245a4d6b88bea48b3f502973dc0a5ab47bc600b703e49c017 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.egp 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.egp 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.egp 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77235 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 77235 ']' 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
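Note on the key material generated above: every gen_dhchap_key call in this trace follows the same shape — read len/2 random bytes from /dev/urandom as hex with xxd, wrap the hex into a DHHC-1 secret for the chosen digest (null/sha256/sha384/sha512 map to ids 0–3) via an inline python helper, store the result in a chmod-0600 temp file, and echo the file path back so auth.sh can record it in keys[]/ckeys[]. A minimal sketch of that flow, with the python formatting step treated as opaque (format_to_dhhc1 below is a hypothetical stand-in, not the real nvmf/common.sh helper):

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2                             # e.g. "sha256" 32
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # hex key material, as in the trace
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # the real script emits "DHHC-1:<digest-id>:<payload>:" through an inline
        # python snippet whose body is not visible in the xtrace; placeholder here
        format_to_dhhc1 "$key" "$digest" > "$file"
        chmod 0600 "$file"                                 # secrets must not be world-readable
        echo "$file"
    }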
00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:27.335 14:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tme 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.KnK ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KnK 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9Az 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.GGC ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GGC 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.3op 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZJR ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZJR 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
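The waitforlisten above confirms the nvmf target (PID 77235) is serving /var/tmp/spdk.sock; the for-keyid loop around host/auth.sh@80–82 then registers each generated secret with the target keyring, pairing keyN with its controller-side ckeyN where one exists. rpc_cmd is the test harness wrapper around scripts/rpc.py, so the stand-alone equivalent of these keyring_file_add_key calls is roughly the following (paths are the temp files minted above; shown for illustration only):

    # keyring registration as performed by the rpc_cmd calls in this trace
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.tme
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KnK
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.9Az
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GGC
    # ...and likewise for key2/ckey2, key3/ckey3 and key4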
00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.0lK 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nZJ ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nZJ 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.egp 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.594 14:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
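nvmet_auth_init then resolves the initiator-facing address (10.0.0.1) and hands it to configure_kernel_target, which builds a Linux-kernel NVMe-oF/TCP target under /sys/kernel/config/nvmet backed by a free local NVMe block device (the GPT probing below settles on /dev/nvme1n1). Condensed, the mkdir/echo/ln -s sequence that follows corresponds roughly to the sketch below; the configfs attribute file names are assumed from the standard nvmet layout, since the xtrace does not show the redirection targets:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"      # assumed attribute
    echo 1            > "$subsys/attr_allow_any_host"                  # assumed attribute
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"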
00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:27.595 14:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:28.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.164 Waiting for block devices as requested 00:24:28.164 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.424 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:29.384 No valid GPT data, bailing 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:29.384 No valid GPT data, bailing 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:29.384 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:29.385 No valid GPT data, bailing 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:29.385 No valid GPT data, bailing 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:29.385 14:02:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:29.385 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -a 10.0.0.1 -t tcp -s 4420 00:24:29.643 00:24:29.643 Discovery Log Number of Records 2, Generation counter 2 00:24:29.643 =====Discovery Log Entry 0====== 00:24:29.643 trtype: tcp 00:24:29.643 adrfam: ipv4 00:24:29.643 subtype: current discovery subsystem 00:24:29.643 treq: not specified, sq flow control disable supported 00:24:29.643 portid: 1 00:24:29.643 trsvcid: 4420 00:24:29.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:29.643 traddr: 10.0.0.1 00:24:29.643 eflags: none 00:24:29.643 sectype: none 00:24:29.643 =====Discovery Log Entry 1====== 00:24:29.643 trtype: tcp 00:24:29.643 adrfam: ipv4 00:24:29.643 subtype: nvme subsystem 00:24:29.643 treq: not specified, sq flow control disable supported 00:24:29.643 portid: 1 00:24:29.643 trsvcid: 4420 00:24:29.643 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:29.643 traddr: 10.0.0.1 00:24:29.643 eflags: none 00:24:29.643 sectype: none 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.643 14:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.643 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 nvme0n1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 nvme0n1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.902 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.160 nvme0n1 00:24:30.160 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.160 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.160 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.160 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.160 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.160 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.161 14:02:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.161 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.421 nvme0n1 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:30.421 14:02:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.421 nvme0n1 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.421 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:30.422 14:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.680 nvme0n1 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.680 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.938 nvme0n1 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.938 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.197 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.198 nvme0n1 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.198 14:02:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.198 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.456 nvme0n1 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.456 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.457 14:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.716 nvme0n1 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
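[Editor's sketch] The repeated nvmf/common.sh@741-755 entries above show how the test resolves the address it later passes to bdev_nvme_attach_controller: an associative array maps the transport to the name of an environment variable, that variable is dereferenced, and the result (10.0.0.1 in this run) is echoed. The following is a hedged reconstruction inferred only from the trace; the TEST_TRANSPORT variable name and the surrounding error handling are assumptions, not copied from the SPDK sources.

    # sketch of get_main_ns_ip as suggested by the nvmf/common.sh@741-755 trace lines
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # transport is tcp in this run, so the initiator IP variable name is selected
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        ip=${!ip}                              # dereferences to 10.0.0.1 in this log
        [[ -z $ip ]] && return 1
        echo "$ip"
    }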
00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.716 nvme0n1 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.716 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
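[Editor's sketch] At this point the trace moves from the ffdhe3072 iterations to ffdhe4096, following the same pattern for every (dhgroup, keyid) pair: host/auth.sh@102-103 programs the target side via nvmet_auth_set_key, then host/auth.sh@104 (connect_authenticate, auth.sh@55-65) restricts the host to one digest/dhgroup, attaches with the key under test, checks the controller came up, and detaches. The sketch below is reconstructed from the traced commands only; the keys[]/ckeys[] arrays and their population are assumed.

    # sketch of the driving loop (host/auth.sh@101-103) and of connect_authenticate
    # (host/auth.sh@55-65), as suggested by the trace; not the verbatim SPDK script
    for dhgroup in "${dhgroups[@]}"; do              # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do               # key IDs 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key/dhgroup setup
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side DH-HMAC-CHAP attempt
        done
    done

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # limit the host to one digest/dhgroup pair, then attach with the keyid under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the controller only exists if authentication succeeded; verify, then clean up
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }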
00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.282 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.540 nvme0n1 00:24:32.540 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.540 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.540 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.540 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.540 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.540 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.540 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.541 14:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.800 nvme0n1 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.800 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.059 nvme0n1 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.059 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.318 nvme0n1 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.318 14:02:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.318 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.579 nvme0n1 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.579 14:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.986 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.987 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.987 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.247 nvme0n1 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.247 14:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.508 nvme0n1 00:24:35.508 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.508 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.508 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.508 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.508 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.508 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.769 
14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.769 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 nvme0n1 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.029 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.289 nvme0n1 00:24:36.289 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.289 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.289 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.289 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.289 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.289 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.548 14:02:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.548 14:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.807 nvme0n1 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.807 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.808 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.376 nvme0n1 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.376 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.377 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.377 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.377 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.377 14:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.377 14:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.377 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.377 14:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.945 nvme0n1 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:37.945 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.946 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.514 nvme0n1 00:24:38.514 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.514 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.514 14:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.514 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.514 14:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.514 
14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.514 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
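
[editor's note, not part of the captured trace] The block above keeps repeating the same host-side sequence for each digest / DH-group / key-index combination; the sketch below is a condensed reconstruction from the commands visible in this trace, not the verbatim host/auth.sh source. rpc_cmd is the autotest helper that forwards its arguments to SPDK's scripts/rpc.py, the DHHC-1:... strings echoed above are the textual DH-HMAC-CHAP secrets, and the named keys (key3/ckey3, ...) are assumed to have been loaded earlier in the run.

# --- sketch (hedged reconstruction, not the original script) ---------------
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Limit the initiator to the one digest/DH-group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach to the target (10.0.0.1:4420 in this run); the controller key is
    # omitted when no ckey is configured for a key index (keyid 4 above).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # The attach only yields a controller if DH-HMAC-CHAP authentication passed.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
# ----------------------------------------------------------------------------
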
00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.515 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.083 nvme0n1 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.083 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.343 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.344 
14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.344 14:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.912 nvme0n1 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.912 nvme0n1 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.912 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
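
[editor's note, not part of the captured trace] At this point the trace has moved from the sha256 passes to the sha384 sweep, restarting the DH-group loop at ffdhe2048. The sketch below shows the overall shape of the sweep as it appears from the loop markers at host/auth.sh@100-104; it is a hedged reconstruction, and the write destinations of nvmet_auth_set_key (presumably the target's configfs attributes) are not visible in this xtrace output.

# --- sketch (hedged reconstruction, not the original script) ---------------
for digest in "${digests[@]}"; do            # e.g. sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do       # key indexes 0-4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
        done
    done
done
# ----------------------------------------------------------------------------
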
00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.913 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.172 nvme0n1 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.172 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.173 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.432 nvme0n1 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.432 nvme0n1 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.432 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.690 14:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.690 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.691 nvme0n1 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
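For each digest/DH-group/key-id combination the target is (re)configured first: the nvmet_auth_set_key calls traced above stage the DH-HCHAP hash, DH group and DHHC-1 secrets on the kernel nvmet side before the host attempts to attach. The bare "echo 'hmac(sha384)'", "echo ffdhe3072" and "echo DHHC-1:..." lines are those writes with their redirections hidden by xtrace. A rough sketch of what such a helper does, assuming the values land in the nvmet configfs host entry (the configfs path, the attribute names and the hostnqn variable below are assumptions, not visible in this log):

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed location of the allowed-host entry for nqn.2024-02.io.spdk:host0;
    # its creation is not shown in this part of the log.
    local host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo "hmac($digest)"  > "$host_cfg/dhchap_hash"
    echo "$dhgroup"       > "$host_cfg/dhchap_dhgroup"
    echo "$key"           > "$host_cfg/dhchap_key"
    # A controller (bidirectional) secret exists only for some key ids.
    [[ -z $ckey ]] || echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
}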
00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.691 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.949 nvme0n1 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
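The host side then runs the same verification for every combination: connect_authenticate restricts the initiator to the digest and DH group under test, attaches to nqn.2024-02.io.spdk:cnode0 at 10.0.0.1:4420 with the matching key (plus controller key when one exists), checks that controller nvme0 and its nvme0n1 namespace actually appeared, and detaches again. A condensed sketch of that flow as it can be read from the trace; rpc_cmd and the keys/ckeys arrays come from the log, while the TEST_TRANSPORT lookup inside get_main_ns_ip is an assumption (the trace only shows its expanded value, tcp):

# Resolve the address the initiator should dial, keyed by transport type.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}       # tcp -> NVMF_INITIATOR_IP here
    [[ -n $ip && -n ${!ip} ]] && echo "${!ip}" # expands to 10.0.0.1 in this run
}

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Limit the initiator to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach with the host key, and the controller key when one was generated.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # A successful DH-HCHAP handshake exposes controller nvme0 (and nvme0n1).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The remainder of the trace simply repeats this target-then-host cycle for every key id 0 through 4 under each DH group exercised in this part of the log (ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144), all with hmac(sha384) as the digest.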
00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.949 nvme0n1 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.949 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.209 nvme0n1 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.209 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.468 nvme0n1 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:41.468 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.469 14:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.753 nvme0n1 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.753 14:02:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.753 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.754 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.013 nvme0n1 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.013 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.272 nvme0n1 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.272 14:02:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.272 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.531 nvme0n1 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:42.531 14:02:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.531 14:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.790 nvme0n1 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:42.790 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.049 nvme0n1 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.049 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.308 nvme0n1 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.308 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.309 14:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.877 nvme0n1 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.877 14:02:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.877 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.136 nvme0n1 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.136 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.395 nvme0n1 00:24:44.395 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.395 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.395 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.395 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.395 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.395 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.653 14:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.913 nvme0n1 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.913 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.482 nvme0n1 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.482 14:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.051 nvme0n1 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.051 14:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 nvme0n1 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.620 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.189 nvme0n1 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.189 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.449 14:02:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.449 14:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.450 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.450 14:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.708 nvme0n1 00:24:47.708 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.708 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.708 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.708 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.708 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.968 nvme0n1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.968 14:02:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.968 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.228 nvme0n1 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:48.228 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.229 nvme0n1 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.229 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.489 14:02:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.489 14:02:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.489 nvme0n1 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.489 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.490 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:48.490 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.490 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.490 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.490 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.490 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:48.490 14:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.490 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.749 nvme0n1 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.749 nvme0n1 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.749 
14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.749 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.009 14:02:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.009 nvme0n1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.009 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.270 nvme0n1 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.270 14:02:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.270 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.530 nvme0n1 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.530 
14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.530 14:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.530 nvme0n1 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.530 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.791 nvme0n1 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.791 14:02:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.791 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.051 nvme0n1 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.051 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 nvme0n1 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.312 14:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.573 nvme0n1 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.573 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.834 nvme0n1 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.834 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.403 nvme0n1 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
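(For orientation, a condensed sketch of one loop iteration of host/auth.sh as echoed by the xtrace above; rpc_cmd is the test suite's wrapper around scripts/rpc.py, nvmet_auth_set_key is the helper defined in host/auth.sh, and the address, NQNs and key names are taken from the trace itself rather than being authoritative.)

    # Target side: install key1 (and controller key ckey1) for the sha512/ffdhe6144 round.
    nvmet_auth_set_key sha512 ffdhe6144 1
    # Host side: restrict the initiator to the same digest and DH group...
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # ...then attach with the matching key pair and confirm the controller comes up.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next keyid
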
00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.403 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.404 14:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.664 nvme0n1 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.664 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.923 nvme0n1 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.923 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:52.182 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.183 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 nvme0n1 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.442 14:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.702 nvme0n1 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.702 14:02:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc3YTViNTI4MTMwYzFhNWNjYmNlOTUyMDkwMWQzNGYhHt/l: 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: ]] 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU0YzA4NTE2NjBiMDZlYmUwZjcyY2Y0YTU3NWQ4YTUzYzI1YTY2M2ZmMmZhYjcxMDEyYmMwZWU5YmQ3N2I2ZQhbc9A=: 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.702 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.961 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.223 nvme0n1 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:53.482 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.483 14:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.050 nvme0n1 00:24:54.050 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.050 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.050 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.050 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.050 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.051 14:02:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhjNjFhNDY0ZTY5MjU0YWExODUxZDQ3M2EzYzAxN2HdEXyL: 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: ]] 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ0YzUzZDVmYTAzOGEzZTA0Y2Y2NDVjZjY1MDk2NjamPBYI: 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.051 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.617 nvme0n1 00:24:54.617 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.617 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.617 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.617 14:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.617 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.617 14:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBiYzgwNDc0OWM4NWUxMTM5NjE5ZjYwNzY3NmFiMjAxODJjNTc3MjBkZWQxMWVm/nhOfw==: 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWZkMjFiZTJiNjk4ZWQwOWJlOGQ2YzhiZjM3MGVjYzV4Aqdy: 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:54.617 14:02:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.617 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.202 nvme0n1 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTc4YmE2OWEzMDIzYzlmMjQ1YTRkNmI4OGJlYTQ4YjNmNTAyOTczZGMwYTVhYjQ3YmM2MDBiNzAzZTQ5YzAxN3CUZfI=: 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:55.202 14:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.784 nvme0n1 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ5NzFiYmE3ZTc4Yjg2ZjIxOGE3NWJiNmMyNzRkYTQyNGY4YTgxMzE3ZWI1MTQwTXJjOQ==: 00:24:55.784 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZGFmOGM0ZDcxMDM0ZGJlY2QwMzhmNzQ5MWIxMGFlYmUxNmRjMjUzZTljNTRjedseIg==: 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.785 
14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.785 request: 00:24:55.785 { 00:24:55.785 "name": "nvme0", 00:24:55.785 "trtype": "tcp", 00:24:55.785 "traddr": "10.0.0.1", 00:24:55.785 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:55.785 "adrfam": "ipv4", 00:24:55.785 "trsvcid": "4420", 00:24:55.785 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:55.785 "method": "bdev_nvme_attach_controller", 00:24:55.785 "req_id": 1 00:24:55.785 } 00:24:55.785 Got JSON-RPC error response 00:24:55.785 response: 00:24:55.785 { 00:24:55.785 "code": -32602, 00:24:55.785 "message": "Invalid parameters" 00:24:55.785 } 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:55.785 
14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.785 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.044 request: 00:24:56.044 { 00:24:56.044 "name": "nvme0", 00:24:56.044 "trtype": "tcp", 00:24:56.044 "traddr": "10.0.0.1", 00:24:56.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:56.044 "adrfam": "ipv4", 00:24:56.044 "trsvcid": "4420", 00:24:56.044 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:56.044 "dhchap_key": "key2", 00:24:56.044 "method": "bdev_nvme_attach_controller", 00:24:56.044 "req_id": 1 00:24:56.044 } 00:24:56.044 Got JSON-RPC error response 00:24:56.044 response: 00:24:56.044 { 00:24:56.044 "code": -32602, 00:24:56.044 "message": "Invalid parameters" 00:24:56.044 } 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
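The run above is exercising the negative path of host/auth.sh against the in-kernel target: the host is only admitted when it presents a DH-HMAC-CHAP key the target accepts, so an attach that omits the key, or offers a key/controller-key combination the target will not take, is rejected with JSON-RPC error -32602, and the controller-count check that follows confirms nothing was created. A condensed, hand-run sketch of the same check (a sketch only, assuming the SPDK initiator app from this run is still listening on its default RPC socket; key names and NQNs are the ones used above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Expected to fail: this key is not one the kernel target will accept for this host.
    if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "unexpected: attach with an unaccepted key succeeded" >&2
        exit 1
    fi
    # The rejected attach must not have left a controller behind.
    [[ "$($rpc bdev_nvme_get_controllers | jq length)" -eq 0 ]]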
00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.044 request: 00:24:56.044 { 00:24:56.044 "name": "nvme0", 00:24:56.044 "trtype": "tcp", 00:24:56.044 "traddr": "10.0.0.1", 00:24:56.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:56.044 "adrfam": "ipv4", 00:24:56.044 "trsvcid": "4420", 00:24:56.044 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:56.044 "dhchap_key": "key1", 00:24:56.044 "dhchap_ctrlr_key": "ckey2", 00:24:56.044 "method": "bdev_nvme_attach_controller", 00:24:56.044 
"req_id": 1 00:24:56.044 } 00:24:56.044 Got JSON-RPC error response 00:24:56.044 response: 00:24:56.044 { 00:24:56.044 "code": -32602, 00:24:56.044 "message": "Invalid parameters" 00:24:56.044 } 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.044 rmmod nvme_tcp 00:24:56.044 rmmod nvme_fabrics 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77235 ']' 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77235 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 77235 ']' 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 77235 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77235 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77235' 00:24:56.044 killing process with pid 77235 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 77235 00:24:56.044 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 77235 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.611 14:02:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:56.611 14:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:56.611 14:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:56.611 14:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:56.611 14:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:56.612 14:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:56.612 14:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:56.612 14:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:56.612 14:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:57.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:57.554 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:57.554 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:57.812 14:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tme /tmp/spdk.key-null.9Az /tmp/spdk.key-sha256.3op /tmp/spdk.key-sha384.0lK /tmp/spdk.key-sha512.egp /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:57.812 14:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:58.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:58.329 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:58.329 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:58.329 ************************************ 00:24:58.329 END TEST nvmf_auth_host 00:24:58.329 ************************************ 00:24:58.329 00:24:58.329 real 0m33.112s 00:24:58.329 user 0m29.965s 00:24:58.329 sys 0m4.920s 00:24:58.329 14:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:58.329 14:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.329 14:02:56 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:24:58.329 14:02:56 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:58.329 14:02:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 
']' 00:24:58.329 14:02:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:58.329 14:02:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.329 ************************************ 00:24:58.329 START TEST nvmf_digest 00:24:58.329 ************************************ 00:24:58.329 14:02:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:58.329 * Looking for test storage... 00:24:58.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:58.329 14:02:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.589 14:02:56 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:58.589 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:58.590 Cannot find device "nvmf_tgt_br" 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.590 Cannot find device "nvmf_tgt_br2" 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:58.590 14:02:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:58.590 Cannot find device "nvmf_tgt_br" 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:58.590 Cannot find device "nvmf_tgt_br2" 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.590 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:58.590 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:58.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:58.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:24:58.850 00:24:58.850 --- 10.0.0.2 ping statistics --- 00:24:58.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.850 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:58.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:58.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:24:58.850 00:24:58.850 --- 10.0.0.3 ping statistics --- 00:24:58.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.850 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:58.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:58.850 00:24:58.850 --- 10.0.0.1 ping statistics --- 00:24:58.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.850 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.850 14:02:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:59.109 ************************************ 00:24:59.109 START TEST nvmf_digest_clean 00:24:59.109 ************************************ 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:59.109 14:02:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=78783 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 78783 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 78783 ']' 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:59.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:59.109 14:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:59.109 [2024-05-15 14:02:57.492517] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:24:59.109 [2024-05-15 14:02:57.492584] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.109 [2024-05-15 14:02:57.626498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.370 [2024-05-15 14:02:57.778100] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.370 [2024-05-15 14:02:57.778169] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.370 [2024-05-15 14:02:57.778180] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.370 [2024-05-15 14:02:57.778190] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.370 [2024-05-15 14:02:57.778198] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
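nvmfappstart brings the target up inside the test namespace with --wait-for-rpc, so the app idles before subsystem initialization until the harness configures it over RPC; waitforlisten then blocks until the default RPC socket answers. A rough stand-alone equivalent of those two steps, reusing the exact nvmf_tgt command line recorded above (the polling loop is only an illustrative stand-in for the harness's waitforlisten helper):

    # Start the paused target inside the test network namespace.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Wait until the app responds on the default RPC socket (/var/tmp/spdk.sock).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done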
00:24:59.370 [2024-05-15 14:02:57.778228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.942 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:00.202 null0 00:25:00.202 [2024-05-15 14:02:58.556524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.202 [2024-05-15 14:02:58.580397] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:00.202 [2024-05-15 14:02:58.580895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78815 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78815 /var/tmp/bperf.sock 00:25:00.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
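With the target listening on 10.0.0.2:4420, run_bperf starts a second SPDK app, bdevperf, on its own RPC socket so the digest workload can be configured independently of the target: framework_start_init finishes bdevperf's deferred startup, the controller is attached with --ddgst to enable TCP data digests, and bdevperf.py drives the timed run. A condensed sketch assembled from the command lines recorded in this run (first randread case, 4096-byte blocks, queue depth 128):

    spdk=/home/vagrant/spdk_repo/spdk
    # bdevperf on core mask 0x2, private RPC socket, paused until framework_start_init.
    $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # Once /var/tmp/bperf.sock answers, finish init and attach with data digest enabled.
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Kick off the 2-second run and wait for its results.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests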
00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 78815 ']' 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:00.202 14:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:00.202 [2024-05-15 14:02:58.639493] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:00.202 [2024-05-15 14:02:58.639791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78815 ] 00:25:00.460 [2024-05-15 14:02:58.780184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.460 [2024-05-15 14:02:58.880892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.028 14:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:01.028 14:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:01.028 14:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:01.028 14:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:01.028 14:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:01.286 14:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.286 14:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.545 nvme0n1 00:25:01.545 14:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:01.545 14:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.545 Running I/O for 2 seconds... 
00:25:04.073 00:25:04.073 Latency(us) 00:25:04.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.073 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:04.073 nvme0n1 : 2.01 18941.92 73.99 0.00 0.00 6753.34 2263.49 21687.42 00:25:04.073 =================================================================================================================== 00:25:04.073 Total : 18941.92 73.99 0.00 0.00 6753.34 2263.49 21687.42 00:25:04.073 0 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:04.073 | select(.opcode=="crc32c") 00:25:04.073 | "\(.module_name) \(.executed)"' 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78815 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 78815 ']' 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 78815 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78815 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78815' 00:25:04.073 killing process with pid 78815 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 78815 00:25:04.073 Received shutdown signal, test time was about 2.000000 seconds 00:25:04.073 00:25:04.073 Latency(us) 00:25:04.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.073 =================================================================================================================== 00:25:04.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 78815 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78877 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78877 /var/tmp/bperf.sock 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 78877 ']' 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:04.073 14:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:04.331 [2024-05-15 14:03:02.652812] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:04.331 [2024-05-15 14:03:02.653063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:25:04.331 Zero copy mechanism will not be used. 
00:25:04.331 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78877 ] 00:25:04.331 [2024-05-15 14:03:02.788339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.331 [2024-05-15 14:03:02.889281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.264 14:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:05.264 14:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:05.264 14:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:05.264 14:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:05.264 14:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:05.523 14:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.523 14:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.523 nvme0n1 00:25:05.782 14:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:05.782 14:03:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.782 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.782 Zero copy mechanism will not be used. 00:25:05.782 Running I/O for 2 seconds... 
00:25:07.688 00:25:07.688 Latency(us) 00:25:07.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.688 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:07.688 nvme0n1 : 2.00 8844.94 1105.62 0.00 0.00 1806.20 1671.30 4263.79 00:25:07.688 =================================================================================================================== 00:25:07.688 Total : 8844.94 1105.62 0.00 0.00 1806.20 1671.30 4263.79 00:25:07.688 0 00:25:07.688 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:07.688 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:07.688 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:07.688 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:07.688 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:07.689 | select(.opcode=="crc32c") 00:25:07.689 | "\(.module_name) \(.executed)"' 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78877 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 78877 ']' 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 78877 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78877 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:07.949 killing process with pid 78877 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78877' 00:25:07.949 Received shutdown signal, test time was about 2.000000 seconds 00:25:07.949 00:25:07.949 Latency(us) 00:25:07.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.949 =================================================================================================================== 00:25:07.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 78877 00:25:07.949 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 78877 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78932 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78932 /var/tmp/bperf.sock 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 78932 ']' 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:08.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:08.208 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.209 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:08.209 14:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.209 [2024-05-15 14:03:06.698422] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:25:08.209 [2024-05-15 14:03:06.698501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78932 ] 00:25:08.469 [2024-05-15 14:03:06.835155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.469 [2024-05-15 14:03:06.936000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.039 14:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:09.039 14:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:09.039 14:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:09.039 14:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:09.039 14:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:09.300 14:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.300 14:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.559 nvme0n1 00:25:09.559 14:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:09.559 14:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.818 Running I/O for 2 seconds... 
00:25:11.724 00:25:11.724 Latency(us) 00:25:11.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.724 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:11.724 nvme0n1 : 2.00 20751.25 81.06 0.00 0.00 6163.37 5606.09 13580.95 00:25:11.724 =================================================================================================================== 00:25:11.725 Total : 20751.25 81.06 0.00 0.00 6163.37 5606.09 13580.95 00:25:11.725 0 00:25:11.725 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:11.725 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:11.725 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:11.725 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:11.725 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:11.725 | select(.opcode=="crc32c") 00:25:11.725 | "\(.module_name) \(.executed)"' 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78932 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 78932 ']' 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 78932 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78932 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78932' 00:25:11.998 killing process with pid 78932 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 78932 00:25:11.998 Received shutdown signal, test time was about 2.000000 seconds 00:25:11.998 00:25:11.998 Latency(us) 00:25:11.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.998 =================================================================================================================== 00:25:11.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.998 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 78932 00:25:12.256 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:12.256 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:12.256 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78992 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78992 /var/tmp/bperf.sock 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 78992 ']' 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:12.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:12.257 14:03:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:12.257 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:12.257 Zero copy mechanism will not be used. 00:25:12.257 [2024-05-15 14:03:10.728525] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:25:12.257 [2024-05-15 14:03:10.728601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78992 ] 00:25:12.516 [2024-05-15 14:03:10.870308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.516 [2024-05-15 14:03:10.972390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.084 14:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:13.084 14:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:13.084 14:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:13.084 14:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.084 14:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:13.387 14:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.387 14:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.654 nvme0n1 00:25:13.654 14:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:13.654 14:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.654 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:13.654 Zero copy mechanism will not be used. 00:25:13.654 Running I/O for 2 seconds... 
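As in the previous run, once perform_tests returns, the script confirms which accel module actually executed the crc32c digest operations. The check, sketched here against the same bperf socket, is a single RPC piped through jq:

  # dump per-opcode accel statistics from the bdevperf app and pick out crc32c
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # digest_clean expects module_name "software" and a non-zero executed count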
00:25:16.191 00:25:16.191 Latency(us) 00:25:16.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.191 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:16.191 nvme0n1 : 2.00 8653.81 1081.73 0.00 0.00 1845.18 1315.98 10948.99 00:25:16.191 =================================================================================================================== 00:25:16.191 Total : 8653.81 1081.73 0.00 0.00 1845.18 1315.98 10948.99 00:25:16.191 0 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:16.191 | select(.opcode=="crc32c") 00:25:16.191 | "\(.module_name) \(.executed)"' 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:16.191 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78992 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 78992 ']' 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 78992 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78992 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:16.192 killing process with pid 78992 00:25:16.192 Received shutdown signal, test time was about 2.000000 seconds 00:25:16.192 00:25:16.192 Latency(us) 00:25:16.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.192 =================================================================================================================== 00:25:16.192 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78992' 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 78992 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 78992 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 78783 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 78783 ']' 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 78783 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78783 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:16.192 killing process with pid 78783 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78783' 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 78783 00:25:16.192 [2024-05-15 14:03:14.718820] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:16.192 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 78783 00:25:16.451 00:25:16.451 real 0m17.499s 00:25:16.451 user 0m32.097s 00:25:16.451 sys 0m5.389s 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:16.451 ************************************ 00:25:16.451 END TEST nvmf_digest_clean 00:25:16.451 ************************************ 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:16.451 ************************************ 00:25:16.451 START TEST nvmf_digest_error 00:25:16.451 ************************************ 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:16.451 14:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.451 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:16.451 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:16.451 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79074 00:25:16.451 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79074 00:25:16.451 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 79074 ']' 00:25:16.451 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.452 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:16.452 
14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.452 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:16.452 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:16.452 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:16.710 [2024-05-15 14:03:15.064460] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:16.710 [2024-05-15 14:03:15.064536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.710 [2024-05-15 14:03:15.191726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.968 [2024-05-15 14:03:15.292808] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.969 [2024-05-15 14:03:15.292858] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.969 [2024-05-15 14:03:15.292868] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.969 [2024-05-15 14:03:15.292877] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.969 [2024-05-15 14:03:15.292883] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
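The nvmf_digest_error cases that follow rely on this target being started with --wait-for-rpc: before framework init the crc32c opcode is pinned to the error-injecting accel module, and corruption is injected once a run is set up. Written as plain rpc.py calls against the target socket named above (the script itself goes through its rpc_cmd wrapper), the two key steps are roughly:

  # route every crc32c operation through the "error" accel module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
  # later, per test, corrupt crc32c results so reads fail their data digest check
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256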
00:25:16.969 [2024-05-15 14:03:15.292906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.536 [2024-05-15 14:03:15.956293] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.536 14:03:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.536 null0 00:25:17.536 [2024-05-15 14:03:16.053271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.536 [2024-05-15 14:03:16.077178] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:17.536 [2024-05-15 14:03:16.077409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79106 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79106 /var/tmp/bperf.sock 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 79106 ']' 00:25:17.536 14:03:16 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:17.536 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.795 [2024-05-15 14:03:16.133554] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:17.795 [2024-05-15 14:03:16.133796] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79106 ] 00:25:17.795 [2024-05-15 14:03:16.274954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.054 [2024-05-15 14:03:16.380884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.621 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:18.621 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:18.621 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.621 14:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.621 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:18.621 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.621 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.621 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.621 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.621 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.879 nvme0n1 00:25:18.879 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:18.879 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.879 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.879 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.879 14:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:18.879 14:03:17 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.138 Running I/O for 2 seconds... 00:25:19.138 [2024-05-15 14:03:17.512306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.512361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.512374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.525443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.525482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.525494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.538513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.538551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.538563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.551574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.551609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.551621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.564796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.564829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.564840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.578001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.578033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.578044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.591035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.591064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:14553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.591075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.604066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.604098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.138 [2024-05-15 14:03:17.604109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.138 [2024-05-15 14:03:17.617117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.138 [2024-05-15 14:03:17.617151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.139 [2024-05-15 14:03:17.617162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.139 [2024-05-15 14:03:17.630174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.139 [2024-05-15 14:03:17.630207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.139 [2024-05-15 14:03:17.630218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.139 [2024-05-15 14:03:17.643378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.139 [2024-05-15 14:03:17.643411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.139 [2024-05-15 14:03:17.643422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.139 [2024-05-15 14:03:17.656647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.139 [2024-05-15 14:03:17.656688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.139 [2024-05-15 14:03:17.656699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.139 [2024-05-15 14:03:17.669763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.139 [2024-05-15 14:03:17.669807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.139 [2024-05-15 14:03:17.669819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.139 [2024-05-15 14:03:17.683184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.139 [2024-05-15 14:03:17.683230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.139 [2024-05-15 14:03:17.683241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.139 [2024-05-15 14:03:17.696314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.139 [2024-05-15 14:03:17.696362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.139 [2024-05-15 14:03:17.696373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.398 [2024-05-15 14:03:17.709730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.398 [2024-05-15 14:03:17.709790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.398 [2024-05-15 14:03:17.709802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.398 [2024-05-15 14:03:17.723149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.398 [2024-05-15 14:03:17.723193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.398 [2024-05-15 14:03:17.723205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.398 [2024-05-15 14:03:17.736342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.398 [2024-05-15 14:03:17.736387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.398 [2024-05-15 14:03:17.736398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.398 [2024-05-15 14:03:17.749677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.398 [2024-05-15 14:03:17.749720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.398 [2024-05-15 14:03:17.749731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.398 [2024-05-15 14:03:17.762853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.398 [2024-05-15 14:03:17.762895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.398 [2024-05-15 14:03:17.762907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.776123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 
[2024-05-15 14:03:17.776167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.776178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.789776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.789822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.789834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.803104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.803142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.803153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.816354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.816392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.816403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.829646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.829685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.829696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.843075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.843116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.843128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.856302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.856342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.856353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.869527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.869562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.869573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.882572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.882609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.882621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.895849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.895885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.895896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.908990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.909028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.909039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.922338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.922384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.922396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.935538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.935581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.935593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.399 [2024-05-15 14:03:17.948904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.399 [2024-05-15 14:03:17.948943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.399 [2024-05-15 14:03:17.948955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:17.962841] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:17.962882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:17.962894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:17.976250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:17.976287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:17.976298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:17.990370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:17.990408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:17.990420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.004121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.004156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.004167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.018171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.018207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.018219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.031619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.031655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.031666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.045171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.045207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.045219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
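Every record in this stream is one read whose TCP data digest did not match: the crc32c corruption injected on the target side makes the initiator's nvme_tcp layer flag a data digest error, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev_nvme layer keeps resubmitting since the controller options above set --bdev-retry-count -1. When skimming a captured log like this, the failures can be tallied with something like (the log file name is only a placeholder):

  # count digest failures and the resulting transient transport completions
  grep -c 'data digest error on tqpair' nvmf-digest.log              # placeholder file name
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf-digest.log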
00:25:19.662 [2024-05-15 14:03:18.058850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.058886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.058897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.072107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.072138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.072149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.085269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.085299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.085310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.098881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.098915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.098926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.111915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.111944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.111955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.125040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.125073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.125084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.138217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.138248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.138259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.151423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.151456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.151467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.164563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.164592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.164602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.177805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.177836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.177846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.190967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.190999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.191010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.204059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.662 [2024-05-15 14:03:18.204093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.662 [2024-05-15 14:03:18.204105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.662 [2024-05-15 14:03:18.217143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.663 [2024-05-15 14:03:18.217174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.663 [2024-05-15 14:03:18.217185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.931 [2024-05-15 14:03:18.230879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.931 [2024-05-15 14:03:18.230912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.931 [2024-05-15 14:03:18.230923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.931 [2024-05-15 14:03:18.244047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.931 [2024-05-15 14:03:18.244078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.931 [2024-05-15 14:03:18.244089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.931 [2024-05-15 14:03:18.257401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.931 [2024-05-15 14:03:18.257436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.931 [2024-05-15 14:03:18.257447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.931 [2024-05-15 14:03:18.271313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.931 [2024-05-15 14:03:18.271347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.931 [2024-05-15 14:03:18.271357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.931 [2024-05-15 14:03:18.284606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.931 [2024-05-15 14:03:18.284637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.931 [2024-05-15 14:03:18.284648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.931 [2024-05-15 14:03:18.297890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.297921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.297932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.311491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.311525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.311536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.324627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.324661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 
14:03:18.324672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.337778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.337809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.337819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.356987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.357018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.357029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.370106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.370137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.370147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.383331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.383365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.383376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.396623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.396658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.396669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.410872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.410910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.410921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.424090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.424127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8608 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.424138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.437308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.437355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.437384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.450796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.450831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.450843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.464152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.464189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.464201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.932 [2024-05-15 14:03:18.477788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:19.932 [2024-05-15 14:03:18.477821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.932 [2024-05-15 14:03:18.477833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.191 [2024-05-15 14:03:18.491390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.191 [2024-05-15 14:03:18.491425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.191 [2024-05-15 14:03:18.491437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.191 [2024-05-15 14:03:18.505558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.191 [2024-05-15 14:03:18.505595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.191 [2024-05-15 14:03:18.505607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.191 [2024-05-15 14:03:18.519018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.191 [2024-05-15 14:03:18.519050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.191 [2024-05-15 14:03:18.519061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.532318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.532350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.532361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.545610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.545644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.545655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.559373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.559406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.559417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.573619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.573653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.573665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.596979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.597012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.597024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.616363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.616398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.616410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.631060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.631094] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.631105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.645148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.645190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.645206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.659440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.659484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.659497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.673724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.673772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.673784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.687865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.687901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.687914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.702027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.702063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.702076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.716152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.716190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.716202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.730345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 
00:25:20.192 [2024-05-15 14:03:18.730385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.730402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.192 [2024-05-15 14:03:18.744540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.192 [2024-05-15 14:03:18.744580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.192 [2024-05-15 14:03:18.744596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.758490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.758524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.758536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.772561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.772594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.772606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.786753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.786784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.786797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.800801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.800833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.800844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.815007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.815039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.815051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.829189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.829228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.829245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.843412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.843450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.843466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.857632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.857666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.857679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.871815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.871846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.871857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.886019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.886053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.886065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.900157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.900190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.900201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.914306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.914337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.914349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.928458] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.928491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.928503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.942679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.942712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.942724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.956764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.956797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.956809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.970913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.970948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.970961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.985054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.985089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.450 [2024-05-15 14:03:18.985104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.450 [2024-05-15 14:03:18.999299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.450 [2024-05-15 14:03:18.999334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.451 [2024-05-15 14:03:18.999346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.013508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.013544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.013556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
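The repeating error/READ/completion triplets above are the expected signal of this test case rather than a malfunction: bdevperf is reading from an NVMe-oF TCP controller attached with data digest checking enabled while CRC-32C corruption is injected through the accel error RPC, so a slice of the reads fails its digest check and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The short sketch below condenses that flow; it is assembled from the rpc.py, bdevperf and jq invocations traced further down in this log (sockets, flags, the NQN and the final "(( 143 > 0 ))" check are copied from those traces), the helper variable names are mine, and it is an illustration of the traced sequence, not the host/digest.sh source.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # bperf_rpc in the trace runs this with -s /var/tmp/bperf.sock
bperf_sock=/var/tmp/bperf.sock                    # RPC socket of the bdevperf application

# Keep per-bdev NVMe error counters (--nvme-error-stat) so the injected failures
# can be read back later; the retry count of -1 is copied as-is from the trace.
$rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection is disabled while the controller is attached with data digest
# checking (--ddgst), then switched to corrupting CRC-32C results. rpc_cmd in the
# trace is the autotest helper that talks to the default RPC socket, shown here
# as plain rpc.py without -s.
$rpc accel_error_inject_error -o crc32c -t disable
$rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the 2-second workload, then pull the transient transport error count out
# of the bdev statistics; the test only asserts that it is non-zero.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests
errcount=$($rpc -s $bperf_sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))    # the run summarised below counted 143 such completions

Because the injected fault surfaces as a retriable transient transport error (dnr:0 in the completions above), the workload keeps running and only the per-bdev error counters grow, which is presumably why the test checks the counter instead of the job's exit status.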
00:25:20.710 [2024-05-15 14:03:19.027604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.027636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.027648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.041756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.041789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.041801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.055946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.055979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.055991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.070060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.070090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.070102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.084150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.084181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.084193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.098341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.098374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.098388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.112420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.112452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.112464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.126555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.126589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.126601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.140728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.140773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.140785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.154921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.154960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.154973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.169228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.169269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.169284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.183420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.183457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.183469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.197672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.197713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.197727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.211869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.211908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.211920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.226081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.226121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.240307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.240346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.240356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.254579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.254621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.254633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.710 [2024-05-15 14:03:19.268764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.710 [2024-05-15 14:03:19.268810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.710 [2024-05-15 14:03:19.268822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.968 [2024-05-15 14:03:19.282990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.968 [2024-05-15 14:03:19.283030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.968 [2024-05-15 14:03:19.283043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.968 [2024-05-15 14:03:19.297181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.968 [2024-05-15 14:03:19.297222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.968 [2024-05-15 14:03:19.297237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.968 [2024-05-15 14:03:19.311420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.968 [2024-05-15 14:03:19.311460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.968 [2024-05-15 14:03:19.311472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.968 [2024-05-15 14:03:19.325635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.968 [2024-05-15 14:03:19.325695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.968 [2024-05-15 14:03:19.325710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.968 [2024-05-15 14:03:19.339907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.968 [2024-05-15 14:03:19.339970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.969 [2024-05-15 14:03:19.339983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.969 [2024-05-15 14:03:19.360319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.969 [2024-05-15 14:03:19.360382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.969 [2024-05-15 14:03:19.360397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.969 [2024-05-15 14:03:19.374581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.969 [2024-05-15 14:03:19.374626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.969 [2024-05-15 14:03:19.374640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.969 [2024-05-15 14:03:19.388897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.969 [2024-05-15 14:03:19.388956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.969 [2024-05-15 14:03:19.388969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.969 [2024-05-15 14:03:19.403290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.969 [2024-05-15 14:03:19.403351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.969 [2024-05-15 14:03:19.403365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.969 [2024-05-15 14:03:19.417635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930) 00:25:20.969 [2024-05-15 14:03:19.417691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.969 
[2024-05-15 14:03:19.417705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.969 [2024-05-15 14:03:19.431861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930)
00:25:20.969 [2024-05-15 14:03:19.431916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.969 [2024-05-15 14:03:19.431930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.969 [2024-05-15 14:03:19.446173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930)
00:25:20.969 [2024-05-15 14:03:19.446241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.969 [2024-05-15 14:03:19.446256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.969 [2024-05-15 14:03:19.460500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930)
00:25:20.969 [2024-05-15 14:03:19.460563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.969 [2024-05-15 14:03:19.460578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.969 [2024-05-15 14:03:19.474855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930)
00:25:20.969 [2024-05-15 14:03:19.474909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.969 [2024-05-15 14:03:19.474921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.969 [2024-05-15 14:03:19.489085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cd930)
00:25:20.969 [2024-05-15 14:03:19.489142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.969 [2024-05-15 14:03:19.489155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.969
00:25:20.969 Latency(us)
00:25:20.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:20.969 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:20.969 nvme0n1 : 2.01 18282.58 71.42 0.00 0.00 6996.29 2368.77 29899.16
00:25:20.969 ===================================================================================================================
00:25:20.969 Total : 18282.58 71.42 0.00 0.00 6996.29 2368.77 29899.16
00:25:20.969 0
00:25:21.228 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:21.228 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:21.228 | .driver_specific
00:25:21.228 | .nvme_error
00:25:21.228 | .status_code
00:25:21.228 | .command_transient_transport_error' 00:25:21.228 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:21.228 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79106 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 79106 ']' 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 79106 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79106 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:21.487 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:21.487 killing process with pid 79106 00:25:21.488 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79106' 00:25:21.488 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 79106 00:25:21.488 Received shutdown signal, test time was about 2.000000 seconds 00:25:21.488 00:25:21.488 Latency(us) 00:25:21.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.488 =================================================================================================================== 00:25:21.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.488 14:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 79106 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79162 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79162 /var/tmp/bperf.sock 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 79162 ']' 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:21.746 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bperf.sock... 00:25:21.746 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.747 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:21.747 14:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.747 Zero copy mechanism will not be used. 00:25:21.747 [2024-05-15 14:03:20.117090] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:21.747 [2024-05-15 14:03:20.117178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79162 ] 00:25:21.747 [2024-05-15 14:03:20.253470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.006 [2024-05-15 14:03:20.361754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.573 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:22.573 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:22.573 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.573 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.876 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:22.876 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.876 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.876 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.876 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.876 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.135 nvme0n1 00:25:23.135 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:23.135 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.135 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:23.135 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.135 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:23.135 14:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:23.135 I/O size 
of 131072 is greater than zero copy threshold (65536). 00:25:23.135 Zero copy mechanism will not be used. 00:25:23.135 Running I/O for 2 seconds... 00:25:23.135 [2024-05-15 14:03:21.638710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.638775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.638790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.644253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.644299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.644316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.649843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.649876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.649888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.655408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.655443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.655456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.660873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.660906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.660918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.666311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.666346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.666358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.671866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.671899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.671911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.677339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.677373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.677384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.682881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.682916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.682928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.688436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.688470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.135 [2024-05-15 14:03:21.694073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.135 [2024-05-15 14:03:21.694108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.135 [2024-05-15 14:03:21.694120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.699597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.699633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.699646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.705152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.705188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.705200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.710699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.710751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.710764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.716312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.716346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.716358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.721789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.721820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.721831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.727199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.727232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.727243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.732699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.732743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.732756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.738256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.738290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.738301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.743746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.743778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.743790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.749200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.749238] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.749258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.754618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.754653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.754664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.759987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.760028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.760044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.765340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.765373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.765386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.770758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.770791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.770802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.776165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.776212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.781590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.781629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.781641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.787110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.787144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.787156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.792645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.792680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.792692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.798213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.396 [2024-05-15 14:03:21.798247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.396 [2024-05-15 14:03:21.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.396 [2024-05-15 14:03:21.803575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.803609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.803621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.809053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.809087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.809099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.814507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.814540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.814552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.819999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.820033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.820044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.825505] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.825539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.825551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.831026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.831060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.831071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.836564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.836597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.836610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.842049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.842083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.842095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.847599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.847633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.847645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.853106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.853142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.853154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.858621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.858656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.858668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.864142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.864177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.864189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.869649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.869683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.869695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.875168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.875203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.875215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.880551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.880585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.880598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.885984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.886017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.886029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.891336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.891374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.891387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.896757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.896789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.896801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.902112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.902145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.902157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.907476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.907510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.907522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.912895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.912928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.912940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.918332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.918367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.918379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.923676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.923723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.929123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.929159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.929173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.934509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.934549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.934568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.939975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.940009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.940021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.945537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.945571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.945583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.397 [2024-05-15 14:03:21.951139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.397 [2024-05-15 14:03:21.951179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.397 [2024-05-15 14:03:21.951196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.956535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.956569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.658 [2024-05-15 14:03:21.956581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.961945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.961977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.658 [2024-05-15 14:03:21.961989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.967408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.967449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.658 [2024-05-15 14:03:21.967465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.972910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.972941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.658 [2024-05-15 14:03:21.972952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.978288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.978322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.658 [2024-05-15 14:03:21.978333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.983667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.983700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.658 [2024-05-15 14:03:21.983712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.989168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.989204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.658 [2024-05-15 14:03:21.989218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:21.994614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:21.994647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.658 [2024-05-15 14:03:21.994659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.658 [2024-05-15 14:03:22.000004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.658 [2024-05-15 14:03:22.000039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.000053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.005314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.005363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.005375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.010806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.010839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.010850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.016229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.016262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.016274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.021639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.021674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.021686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.027088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.027128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.027140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.032509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.032552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.032564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.037937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.037969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.037980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.043377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.043411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.043423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.048777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.048808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.048819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.054216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.054250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.054261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.059652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.059688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.059699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.065077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.065112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.065127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.070475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.070509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.070521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.075946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.075984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.076002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.081368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.081402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.081415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.086600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 
00:25:23.659 [2024-05-15 14:03:22.086633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.086645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.092158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.092194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.092211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.097599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.097635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.097649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.103066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.103100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.103112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.108489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.108520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.108531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.113930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.113963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.113975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.119344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.119376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.119388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.124772] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.124802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.124814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.130261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.130294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.130306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.135668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.135701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.135713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.141113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.141150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.659 [2024-05-15 14:03:22.141164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.659 [2024-05-15 14:03:22.146547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.659 [2024-05-15 14:03:22.146582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.146594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.152162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.152196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.152207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.157569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.157603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.157614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.162972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.163006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.163018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.168459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.168500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.168517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.173868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.173900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.173912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.179232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.179266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.179278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.184653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.184687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.184699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.190128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.190161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.190173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.195527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.195560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.195572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.200839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.200871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.200883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.206246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.206281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.206292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.211575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.211609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.211621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.660 [2024-05-15 14:03:22.217043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.660 [2024-05-15 14:03:22.217076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.660 [2024-05-15 14:03:22.217088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.222479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.222512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.222524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.228103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.228136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.228148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.233580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.233615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.233627] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.239215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.239249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.239261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.244795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.244826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.244838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.250271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.250305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.250316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.255705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.255753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.255765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.261176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.261210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.261221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.266619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.266653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.266665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.272031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.272065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.272077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.277394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.277426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.277438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.282823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.920 [2024-05-15 14:03:22.282855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.920 [2024-05-15 14:03:22.282866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.920 [2024-05-15 14:03:22.288210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.288245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.288259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.293639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.293673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.293685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.299069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.299105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.299119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.304528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.304564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.304576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.309926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.309959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.309971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.315332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.315365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.315377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.320687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.320722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.320746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.326131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.326170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.326184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.331502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.331536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.331548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.337012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.337047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.337061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.342560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.342595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.342607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.348119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.348154] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.348165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.353603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.353639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.359063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.359099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.359112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.364536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.364577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.364589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.370030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.370066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.370077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.375408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.375441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.375453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.380680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.380714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.380726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.386130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.386164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.386177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.391462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.391495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.391506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.396993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.397027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.397041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.402353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.402388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.402400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.407671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.407702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.407713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.413023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.413057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.413069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.418409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.418445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.418457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.423785] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.423815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.423827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.429146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.429182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.921 [2024-05-15 14:03:22.429196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.921 [2024-05-15 14:03:22.434555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.921 [2024-05-15 14:03:22.434587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.434600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.439904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.439934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.439946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.445228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.445262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.445276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.450649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.450684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.450696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.456015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.456048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.456059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.461438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.461472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.461483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.467000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.467034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.467046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.472463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.472496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.472507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.922 [2024-05-15 14:03:22.477903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:23.922 [2024-05-15 14:03:22.477935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.922 [2024-05-15 14:03:22.477947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.483357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.483391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.483403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.488760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.488791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.488803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.494718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.494762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.494775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.500109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.500154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.500172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.505427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.505465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.505480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.510759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.510794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.510805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.516261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.516301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.516318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.521617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.521654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.521666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.527025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.527062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.527074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.532436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.532472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.532484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.537874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.537905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.537917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.543245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.543281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.543293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.548581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.548616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.548628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.554052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.554087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.554100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.559434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.559467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.559479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.564830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.564883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.564898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.570267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.570308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.570321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.575551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.575586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.575598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.580883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.580913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.580925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.586226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.586261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.586273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.182 [2024-05-15 14:03:22.591569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.182 [2024-05-15 14:03:22.591605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.182 [2024-05-15 14:03:22.591622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.596960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.596997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.597011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.602371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.602407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.602419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.607704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.607752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.607765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.613098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.613134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.613148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.618558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.618595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.618607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.623962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.623995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.624007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.629356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.629387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.629399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.634757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.634790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.634801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.640148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.640183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.640194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.645502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.645536] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.645548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.650799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.650842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.650853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.656316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.656352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.656364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.661929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.661964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.661976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.667643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.667680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.667694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.673429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.673467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.673479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.679058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.679096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.679108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.684707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.684757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.684769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.690384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.690424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.690436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.696100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.696148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.696165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.701690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.701742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.701756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.707385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.707425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.707437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.713078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.713118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.713132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.718629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.718663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.718676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.724152] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.183 [2024-05-15 14:03:22.724186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.183 [2024-05-15 14:03:22.724198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.183 [2024-05-15 14:03:22.729752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.184 [2024-05-15 14:03:22.729788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.184 [2024-05-15 14:03:22.729801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.184 [2024-05-15 14:03:22.735543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.184 [2024-05-15 14:03:22.735584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.184 [2024-05-15 14:03:22.735597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.741291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.741343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.741356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.746979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.747024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.747036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.752548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.752593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.752606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.758237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.758280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.758292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.763885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.763925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.763937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.769545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.769583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.769595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.775196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.775232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.775243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.780831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.780863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.780875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.786256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.786291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.786304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.791845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.791881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.797402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.797441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.443 [2024-05-15 14:03:22.797456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.443 [2024-05-15 14:03:22.802934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.443 [2024-05-15 14:03:22.802966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.802977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.808569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.808605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.808622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.814244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.814278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.814290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.819864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.819896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.819908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.825289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.825333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.825347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.830862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.830895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.830906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.836417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.836453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.836465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.841924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.841956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.841967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.847518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.847551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.847563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.853139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.853193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.853209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.858717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.858763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.858775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.864228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.864261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.864273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.869649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.869683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.869695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.875156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.875189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.875201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.880782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.880820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.880837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.886237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.886271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.886283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.891783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.891815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.891826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.897277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.897312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.897363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.902844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.902877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.902889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.908406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.908441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.908453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.913876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.913908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.913919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.919260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.919298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.919310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.924677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.924712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.924724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.930117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.930150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.930162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.935437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.935470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.935484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.940848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.940879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.940891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.946275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.946308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.946320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.951699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.951747] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.444 [2024-05-15 14:03:22.951759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.444 [2024-05-15 14:03:22.957036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.444 [2024-05-15 14:03:22.957070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.957085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:22.962366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:22.962405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.962421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:22.967756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:22.967787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.967799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:22.973112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:22.973153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.973168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:22.978542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:22.978576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.978589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:22.984207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:22.984241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.984253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:22.989596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:22.989628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.989644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:22.995069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:22.995102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:22.995114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.445 [2024-05-15 14:03:23.000530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.445 [2024-05-15 14:03:23.000565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.445 [2024-05-15 14:03:23.000579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.005917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.005948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.005960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.011254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.011287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.011299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.016801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.016832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.016844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.022177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.022210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.022222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.027673] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.027708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.027720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.033137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.033171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.033182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.038455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.038489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.038501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.043766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.043797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.043808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.049170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.049204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.049219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.054593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.054626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.054637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.060074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.060108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.060119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.065462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.065495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.065506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.071034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.071067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.071079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.076727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.076771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.076783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.082170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.082204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.705 [2024-05-15 14:03:23.082215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.705 [2024-05-15 14:03:23.087649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.705 [2024-05-15 14:03:23.087687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.087703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.093108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.093143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.093157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.098547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.098582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.098597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.104312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.104351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.104363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.109779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.109809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.109821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.115134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.115167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.115178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.120564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.120596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.120608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.126135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.126177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.126194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.131550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.131581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.131592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.136996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.137028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.137040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.142460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.142493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.142505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.147945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.147977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.147993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.153385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.153416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.153427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.158884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.158914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.158927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.164521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.164554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.164566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.170538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.170571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.170582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.176319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.176353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.706 [2024-05-15 14:03:23.176364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.181960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.181991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.182003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.187579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.187615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.187627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.193206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.193239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.193251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.198837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.198868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.198880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.204500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.204534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.204545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.210000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.706 [2024-05-15 14:03:23.210037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.706 [2024-05-15 14:03:23.210049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.706 [2024-05-15 14:03:23.215430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.215463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.215475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.220893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.220921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.220933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.226315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.226350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.226362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.231837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.231869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.231880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.237401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.237434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.237446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.242946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.242981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.242993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.248498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.248531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.248543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.254084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.254117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.254129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.707 [2024-05-15 14:03:23.259595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.707 [2024-05-15 14:03:23.259629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.707 [2024-05-15 14:03:23.259640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.265063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.265098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.265112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.270574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.270610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.270622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.276163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.276197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.276209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.281850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.281880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.281892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.287702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.287746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.287759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.293499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 
00:25:24.967 [2024-05-15 14:03:23.293532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.293543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.299307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.299342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.299355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.304712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.304760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.304772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.310167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.310202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.310213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.315622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.315655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.315668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.321159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.321193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.321211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.326690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.326723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.326745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.332196] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.332238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.332251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.337646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.337680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.337691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.343132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.343165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.343177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.348503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.348543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.348555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.353992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.354026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.354038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.359421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.359454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.359466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.364855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.364890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.364904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.370518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.370550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.370562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.376077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.376116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.376129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.381981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.382019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.382031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.387576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.387610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.387622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.393291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.393339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.393352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.398922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.398955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.398969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.404521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.404556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.404568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.410155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.967 [2024-05-15 14:03:23.410191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.967 [2024-05-15 14:03:23.410204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.967 [2024-05-15 14:03:23.415761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.415793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.415804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.421293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.421340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.421352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.426845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.426877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.426888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.432363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.432397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.432409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.437797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.437828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.437839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.443308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.443342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.443354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.448774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.448810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.448824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.454218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.454252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.454264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.459660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.459694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.459706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.465032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.465072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.465088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.470395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.470429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.470440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.475822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.475855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.475867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.481258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.481293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.481307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.486715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.486761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.486773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.492190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.492224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.492236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.497765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.497796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.497808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.503346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.503382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.503394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.508918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.508951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.508964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.514400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.514435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.514447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.519838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.519871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.519883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.968 [2024-05-15 14:03:23.525362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:24.968 [2024-05-15 14:03:23.525395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.968 [2024-05-15 14:03:23.525407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.227 [2024-05-15 14:03:23.530784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.227 [2024-05-15 14:03:23.530816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.227 [2024-05-15 14:03:23.530828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.227 [2024-05-15 14:03:23.536258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.227 [2024-05-15 14:03:23.536292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.227 [2024-05-15 14:03:23.536304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.227 [2024-05-15 14:03:23.541661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.227 [2024-05-15 14:03:23.541695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.227 [2024-05-15 14:03:23.541707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.227 [2024-05-15 14:03:23.547111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.227 [2024-05-15 14:03:23.547144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.227 [2024-05-15 14:03:23.547156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.227 [2024-05-15 14:03:23.552559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.227 [2024-05-15 14:03:23.552594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.227 [2024-05-15 14:03:23.552606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.227 [2024-05-15 14:03:23.558004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.227 [2024-05-15 14:03:23.558034] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.227 [2024-05-15 14:03:23.558045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.228 [2024-05-15 14:03:23.563426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.228 [2024-05-15 14:03:23.563456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.228 [2024-05-15 14:03:23.563467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.228 [2024-05-15 14:03:23.568971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.228 [2024-05-15 14:03:23.568999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.228 [2024-05-15 14:03:23.569009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.228 [2024-05-15 14:03:23.574754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.228 [2024-05-15 14:03:23.574781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.228 [2024-05-15 14:03:23.574792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.228 [2024-05-15 14:03:23.580490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.228 [2024-05-15 14:03:23.580519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.228 [2024-05-15 14:03:23.580529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.228 [2024-05-15 14:03:23.586092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.228 [2024-05-15 14:03:23.586121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.228 [2024-05-15 14:03:23.586132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.228 [2024-05-15 14:03:23.591426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0) 00:25:25.228 [2024-05-15 14:03:23.591455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.228 [2024-05-15 14:03:23.591465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.228 [2024-05-15 14:03:23.596841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cc78d0)
00:25:25.228 [2024-05-15 14:03:23.596868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.228 [2024-05-15 14:03:23.596878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:25.228 [2024-05-15 14:03:23.602253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0)
00:25:25.228 [2024-05-15 14:03:23.602283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.228 [2024-05-15 14:03:23.602293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:25.228 [2024-05-15 14:03:23.608061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0)
00:25:25.228 [2024-05-15 14:03:23.608090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.228 [2024-05-15 14:03:23.608100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.228 [2024-05-15 14:03:23.613405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0)
00:25:25.228 [2024-05-15 14:03:23.613432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.228 [2024-05-15 14:03:23.613442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:25.228 [2024-05-15 14:03:23.618785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0)
00:25:25.228 [2024-05-15 14:03:23.618813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.228 [2024-05-15 14:03:23.618823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:25.228 [2024-05-15 14:03:23.624055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cc78d0)
00:25:25.228 [2024-05-15 14:03:23.624084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.228 [2024-05-15 14:03:23.624094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:25.228
00:25:25.228 Latency(us)
00:25:25.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:25.228 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:25.228 nvme0n1 : 2.00 5618.72 702.34 0.00 0.00 2843.98 2526.69 11264.82
00:25:25.228 ===================================================================================================================
00:25:25.228 Total : 5618.72 702.34 0.00 0.00 2843.98 2526.69 11264.82
00:25:25.228 0
00:25:25.228 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- #
get_transient_errcount nvme0n1 00:25:25.228 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:25.228 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:25.228 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:25.228 | .driver_specific 00:25:25.228 | .nvme_error 00:25:25.228 | .status_code 00:25:25.228 | .command_transient_transport_error' 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 363 > 0 )) 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79162 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 79162 ']' 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 79162 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79162 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79162' 00:25:25.488 killing process with pid 79162 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 79162 00:25:25.488 Received shutdown signal, test time was about 2.000000 seconds 00:25:25.488 00:25:25.488 Latency(us) 00:25:25.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.488 =================================================================================================================== 00:25:25.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.488 14:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 79162 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79217 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79217 /var/tmp/bperf.sock 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 79217 ']' 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 
-- # local rpc_addr=/var/tmp/bperf.sock 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:25.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:25.747 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.747 [2024-05-15 14:03:24.145355] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:25.747 [2024-05-15 14:03:24.145429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79217 ] 00:25:25.747 [2024-05-15 14:03:24.272863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.006 [2024-05-15 14:03:24.372174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.680 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:26.680 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:26.680 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.681 14:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.681 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:26.681 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.681 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.681 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.681 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.681 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.939 nvme0n1 00:25:26.939 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:26.939 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.939 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.939 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.939 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:26.940 14:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:27.198 Running I/O for 2 seconds... 00:25:27.198 [2024-05-15 14:03:25.547404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fef90 00:25:27.198 [2024-05-15 14:03:25.549400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.198 [2024-05-15 14:03:25.549443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.198 [2024-05-15 14:03:25.560372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190feb58 00:25:27.198 [2024-05-15 14:03:25.562319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.198 [2024-05-15 14:03:25.562350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:27.198 [2024-05-15 14:03:25.572906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fe2e8 00:25:27.198 [2024-05-15 14:03:25.574831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.198 [2024-05-15 14:03:25.574860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:27.198 [2024-05-15 14:03:25.585982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fda78 00:25:27.198 [2024-05-15 14:03:25.587884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.587911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.598621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fd208 00:25:27.199 [2024-05-15 14:03:25.600521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.600548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.611359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fc998 00:25:27.199 [2024-05-15 14:03:25.613230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.613259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.623922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fc128 00:25:27.199 [2024-05-15 14:03:25.626036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 
14:03:25.626066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.636793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fb8b8 00:25:27.199 [2024-05-15 14:03:25.638635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.638663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.649447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fb048 00:25:27.199 [2024-05-15 14:03:25.651412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.651444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.662190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fa7d8 00:25:27.199 [2024-05-15 14:03:25.664127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.664159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.674669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f9f68 00:25:27.199 [2024-05-15 14:03:25.676689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.676720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.687681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f96f8 00:25:27.199 [2024-05-15 14:03:25.689470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.689497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.700180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f8e88 00:25:27.199 [2024-05-15 14:03:25.702095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.702123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.712850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f8618 00:25:27.199 [2024-05-15 14:03:25.714732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
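For reference, the digest-error pass above is driven by the RPC sequence visible in the xtrace just before the I/O run: bdevperf (listening on /var/tmp/bperf.sock) is told to retry indefinitely and keep per-opcode NVMe error statistics, crc32c error injection is armed on the target's accel layer, and the controller is attached over TCP with data digest (--ddgst) enabled before perform_tests starts the 2-second run whose per-command digest errors are logged above and below. A minimal sketch of that sequence, with arguments copied from the trace; the target-side RPC socket is hidden behind the rpc_cmd wrapper in the trace, so /var/tmp/spdk.sock below is an assumption:

  # bperf side: keep per-opcode error stats and retry forever on failed I/O
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target side (socket path assumed): start from a clean, disabled crc32c injection state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable

  # bperf side: attach the NVMe-oF/TCP controller with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side (socket path assumed): corrupt crc32c results, interval argument -i 256 as in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the bdevperf workload; each injected corruption surfaces as a data digest error record
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests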
00:25:27.199 [2024-05-15 14:03:25.714766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.725537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f7da8 00:25:27.199 [2024-05-15 14:03:25.727445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.727471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.738248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f7538 00:25:27.199 [2024-05-15 14:03:25.739966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.739991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:27.199 [2024-05-15 14:03:25.750977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f6cc8 00:25:27.199 [2024-05-15 14:03:25.752753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.199 [2024-05-15 14:03:25.752778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.458 [2024-05-15 14:03:25.763619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f6458 00:25:27.458 [2024-05-15 14:03:25.765310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.458 [2024-05-15 14:03:25.765342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:27.458 [2024-05-15 14:03:25.776062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f5be8 00:25:27.458 [2024-05-15 14:03:25.777731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.458 [2024-05-15 14:03:25.777763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:27.458 [2024-05-15 14:03:25.788914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f5378 00:25:27.458 [2024-05-15 14:03:25.790704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.458 [2024-05-15 14:03:25.790730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:27.458 [2024-05-15 14:03:25.802219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f4b08 00:25:27.458 [2024-05-15 14:03:25.803984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24525 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.804015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.815031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f4298 00:25:27.459 [2024-05-15 14:03:25.816806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.816837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.827871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f3a28 00:25:27.459 [2024-05-15 14:03:25.829482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.829508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.840601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f31b8 00:25:27.459 [2024-05-15 14:03:25.842382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.842412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.853423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f2948 00:25:27.459 [2024-05-15 14:03:25.855086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.855111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.866388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f20d8 00:25:27.459 [2024-05-15 14:03:25.868224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.868254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.879141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f1868 00:25:27.459 [2024-05-15 14:03:25.880685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.880713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.892010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f0ff8 00:25:27.459 [2024-05-15 14:03:25.893564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:794 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.893592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.904515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f0788 00:25:27.459 [2024-05-15 14:03:25.906108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.906136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.917471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eff18 00:25:27.459 [2024-05-15 14:03:25.919099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.919126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.930889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ef6a8 00:25:27.459 [2024-05-15 14:03:25.933213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.933239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.946818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eee38 00:25:27.459 [2024-05-15 14:03:25.948945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.948972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.961653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ee5c8 00:25:27.459 [2024-05-15 14:03:25.963420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.963446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.975413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190edd58 00:25:27.459 [2024-05-15 14:03:25.976862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.976888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:25.989005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ed4e8 00:25:27.459 [2024-05-15 14:03:25.990442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:25.990470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:26.001626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ecc78 00:25:27.459 [2024-05-15 14:03:26.003163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:26.003192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:27.459 [2024-05-15 14:03:26.014073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ec408 00:25:27.459 [2024-05-15 14:03:26.015686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.459 [2024-05-15 14:03:26.015711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:27.718 [2024-05-15 14:03:26.028823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ebb98 00:25:27.718 [2024-05-15 14:03:26.030632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.718 [2024-05-15 14:03:26.030658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.041999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eb328 00:25:27.719 [2024-05-15 14:03:26.043367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.043394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.054770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eaab8 00:25:27.719 [2024-05-15 14:03:26.056114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.056143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.067551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ea248 00:25:27.719 [2024-05-15 14:03:26.068966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.069003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.080008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e99d8 00:25:27.719 [2024-05-15 14:03:26.081333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:83 nsid:1 lba:16134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.081359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.092805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e9168 00:25:27.719 [2024-05-15 14:03:26.094119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.094146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.105238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e88f8 00:25:27.719 [2024-05-15 14:03:26.106540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.106568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.117923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e8088 00:25:27.719 [2024-05-15 14:03:26.119311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.119339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.130420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e7818 00:25:27.719 [2024-05-15 14:03:26.131854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.131882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.143216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e6fa8 00:25:27.719 [2024-05-15 14:03:26.144464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.144490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.155723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e6738 00:25:27.719 [2024-05-15 14:03:26.157153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.157182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.168341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e5ec8 00:25:27.719 [2024-05-15 14:03:26.169565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.169591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.180902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e5658 00:25:27.719 [2024-05-15 14:03:26.182234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.182262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.193601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e4de8 00:25:27.719 [2024-05-15 14:03:26.194794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.194821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.206588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e4578 00:25:27.719 [2024-05-15 14:03:26.207765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.207792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.218801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e3d08 00:25:27.719 [2024-05-15 14:03:26.219973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.220000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.231705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e3498 00:25:27.719 [2024-05-15 14:03:26.232855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.232882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.244158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e2c28 00:25:27.719 [2024-05-15 14:03:26.245282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.245309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.256960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e23b8 00:25:27.719 [2024-05-15 
14:03:26.258083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.258111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:27.719 [2024-05-15 14:03:26.269680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e1b48 00:25:27.719 [2024-05-15 14:03:26.270783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.719 [2024-05-15 14:03:26.270809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:28.016 [2024-05-15 14:03:26.282054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e12d8 00:25:28.016 [2024-05-15 14:03:26.283136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.283162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.294617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e0a68 00:25:28.017 [2024-05-15 14:03:26.295831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.295859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.307089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e01f8 00:25:28.017 [2024-05-15 14:03:26.308139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.308169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.319665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190df988 00:25:28.017 [2024-05-15 14:03:26.320700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.320726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.332344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190df118 00:25:28.017 [2024-05-15 14:03:26.333390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.333418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.345188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with 
pdu=0x2000190de8a8 00:25:28.017 [2024-05-15 14:03:26.346271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.346299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.357545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190de038 00:25:28.017 [2024-05-15 14:03:26.358709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.358751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.375504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190de038 00:25:28.017 [2024-05-15 14:03:26.377692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.377717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.388166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190de8a8 00:25:28.017 [2024-05-15 14:03:26.390240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.390274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.400809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190df118 00:25:28.017 [2024-05-15 14:03:26.402726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.402781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.413796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190df988 00:25:28.017 [2024-05-15 14:03:26.415675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.415702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.426679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e01f8 00:25:28.017 [2024-05-15 14:03:26.428642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.428668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.439238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2406ab0) with pdu=0x2000190e0a68 00:25:28.017 [2024-05-15 14:03:26.441097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.441121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.451851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e12d8 00:25:28.017 [2024-05-15 14:03:26.453769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.453795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.464547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e1b48 00:25:28.017 [2024-05-15 14:03:26.466443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.466467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.477557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e23b8 00:25:28.017 [2024-05-15 14:03:26.479506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.479531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.489942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e2c28 00:25:28.017 [2024-05-15 14:03:26.491727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.491759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.502763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e3498 00:25:28.017 [2024-05-15 14:03:26.504536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.504561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.515338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e3d08 00:25:28.017 [2024-05-15 14:03:26.517125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.517152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.528283] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e4578 00:25:28.017 [2024-05-15 14:03:26.530116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.530144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.541028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e4de8 00:25:28.017 [2024-05-15 14:03:26.542883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.542910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.553609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e5658 00:25:28.017 [2024-05-15 14:03:26.555332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.555359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:28.017 [2024-05-15 14:03:26.566544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e5ec8 00:25:28.017 [2024-05-15 14:03:26.568320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.017 [2024-05-15 14:03:26.568347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 14:03:26.578848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e6738 00:25:28.277 [2024-05-15 14:03:26.580729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.277 [2024-05-15 14:03:26.580763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 14:03:26.592034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e6fa8 00:25:28.277 [2024-05-15 14:03:26.593802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.277 [2024-05-15 14:03:26.593838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 14:03:26.604756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e7818 00:25:28.277 [2024-05-15 14:03:26.606528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.277 [2024-05-15 14:03:26.606571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:28.277 
[2024-05-15 14:03:26.617695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e8088 00:25:28.277 [2024-05-15 14:03:26.619396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.277 [2024-05-15 14:03:26.619421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 14:03:26.630037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e88f8 00:25:28.278 [2024-05-15 14:03:26.631926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.631956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.642937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e9168 00:25:28.278 [2024-05-15 14:03:26.644551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.644578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.655531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190e99d8 00:25:28.278 [2024-05-15 14:03:26.657217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.657246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.668430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ea248 00:25:28.278 [2024-05-15 14:03:26.670036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.670064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.681097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eaab8 00:25:28.278 [2024-05-15 14:03:26.682826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.682854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.693528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eb328 00:25:28.278 [2024-05-15 14:03:26.695085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.695111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 
dnr:0 00:25:28.278 [2024-05-15 14:03:26.706462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ebb98 00:25:28.278 [2024-05-15 14:03:26.708022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.708050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.718953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ec408 00:25:28.278 [2024-05-15 14:03:26.720473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.720500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.731614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ecc78 00:25:28.278 [2024-05-15 14:03:26.733274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.733301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.744274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ed4e8 00:25:28.278 [2024-05-15 14:03:26.746003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.746033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.756961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190edd58 00:25:28.278 [2024-05-15 14:03:26.758450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.758480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.769396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ee5c8 00:25:28.278 [2024-05-15 14:03:26.771097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.771129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.781980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eee38 00:25:28.278 [2024-05-15 14:03:26.783433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.783464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0040 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.794728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190ef6a8 00:25:28.278 [2024-05-15 14:03:26.796180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.796210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.807304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190eff18 00:25:28.278 [2024-05-15 14:03:26.808723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.808763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.820256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f0788 00:25:28.278 [2024-05-15 14:03:26.821669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.821698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 14:03:26.832528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f0ff8 00:25:28.278 [2024-05-15 14:03:26.833933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.278 [2024-05-15 14:03:26.833961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.845824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f1868 00:25:28.538 [2024-05-15 14:03:26.847215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.847247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.858361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f20d8 00:25:28.538 [2024-05-15 14:03:26.859780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.859809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.871222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f2948 00:25:28.538 [2024-05-15 14:03:26.872575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.872605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.884014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f31b8 00:25:28.538 [2024-05-15 14:03:26.885994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.886024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.897146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f3a28 00:25:28.538 [2024-05-15 14:03:26.898479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.898511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.910040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f4298 00:25:28.538 [2024-05-15 14:03:26.911343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.911373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.922563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f4b08 00:25:28.538 [2024-05-15 14:03:26.923878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.923908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.935041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f5378 00:25:28.538 [2024-05-15 14:03:26.936313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.936344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.947818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f5be8 00:25:28.538 [2024-05-15 14:03:26.949103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.949133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.960686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f6458 00:25:28.538 [2024-05-15 14:03:26.961948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.961979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.973120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f6cc8 00:25:28.538 [2024-05-15 14:03:26.974433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.974463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.985696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f7538 00:25:28.538 [2024-05-15 14:03:26.986969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.538 [2024-05-15 14:03:26.986999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:28.538 [2024-05-15 14:03:26.998333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f7da8 00:25:28.538 [2024-05-15 14:03:26.999611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:26.999641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:28.539 [2024-05-15 14:03:27.011080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f8618 00:25:28.539 [2024-05-15 14:03:27.012259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:27.012290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:28.539 [2024-05-15 14:03:27.023798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f8e88 00:25:28.539 [2024-05-15 14:03:27.025115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:27.025146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:28.539 [2024-05-15 14:03:27.036164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f96f8 00:25:28.539 [2024-05-15 14:03:27.037315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:27.037349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:28.539 [2024-05-15 14:03:27.049008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f9f68 00:25:28.539 [2024-05-15 14:03:27.050148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:27.050174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:28.539 [2024-05-15 14:03:27.061402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fa7d8 00:25:28.539 [2024-05-15 14:03:27.062516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:27.062543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:28.539 [2024-05-15 14:03:27.074128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fb048 00:25:28.539 [2024-05-15 14:03:27.075332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:27.075357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:28.539 [2024-05-15 14:03:27.086528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fb8b8 00:25:28.539 [2024-05-15 14:03:27.087711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.539 [2024-05-15 14:03:27.087752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.099229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fc128 00:25:28.799 [2024-05-15 14:03:27.100298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.100325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.111637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fc998 00:25:28.799 [2024-05-15 14:03:27.112806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.112833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.124111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fd208 00:25:28.799 [2024-05-15 14:03:27.125152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.125178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.136814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fda78 00:25:28.799 [2024-05-15 14:03:27.137912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.137939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.149427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fe2e8 00:25:28.799 [2024-05-15 14:03:27.150440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.150468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.162235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190feb58 00:25:28.799 [2024-05-15 14:03:27.163236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.163263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.179933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fef90 00:25:28.799 [2024-05-15 14:03:27.182085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.182110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.192392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190feb58 00:25:28.799 [2024-05-15 14:03:27.194493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.194523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.205090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fe2e8 00:25:28.799 [2024-05-15 14:03:27.207013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.207038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.217792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fda78 00:25:28.799 [2024-05-15 14:03:27.219721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.219763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.230807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fd208 00:25:28.799 [2024-05-15 14:03:27.232678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 
14:03:27.232705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.243023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fc998 00:25:28.799 [2024-05-15 14:03:27.245051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.245079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.255966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fc128 00:25:28.799 [2024-05-15 14:03:27.257828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.257855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.268417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fb8b8 00:25:28.799 [2024-05-15 14:03:27.270266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.270291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.281385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fb048 00:25:28.799 [2024-05-15 14:03:27.283208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.283232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.294217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190fa7d8 00:25:28.799 [2024-05-15 14:03:27.296046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.296072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.306916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f9f68 00:25:28.799 [2024-05-15 14:03:27.308695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.308722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.319803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f96f8 00:25:28.799 [2024-05-15 14:03:27.321580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:28.799 [2024-05-15 14:03:27.321607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.332070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f8e88 00:25:28.799 [2024-05-15 14:03:27.334001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.799 [2024-05-15 14:03:27.334026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:28.799 [2024-05-15 14:03:27.344803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f8618 00:25:28.800 [2024-05-15 14:03:27.346790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.800 [2024-05-15 14:03:27.346828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:28.800 [2024-05-15 14:03:27.357334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f7da8 00:25:29.058 [2024-05-15 14:03:27.359210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.058 [2024-05-15 14:03:27.359238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:29.058 [2024-05-15 14:03:27.370226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f7538 00:25:29.058 [2024-05-15 14:03:27.371939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.058 [2024-05-15 14:03:27.371963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:29.058 [2024-05-15 14:03:27.382587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f6cc8 00:25:29.058 [2024-05-15 14:03:27.384435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.058 [2024-05-15 14:03:27.384459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.058 [2024-05-15 14:03:27.395261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f6458 00:25:29.058 [2024-05-15 14:03:27.396942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.058 [2024-05-15 14:03:27.396967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:29.058 [2024-05-15 14:03:27.407713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f5be8 00:25:29.058 [2024-05-15 14:03:27.409538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23017 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:29.058 [2024-05-15 14:03:27.409566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.420489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f5378 00:25:29.059 [2024-05-15 14:03:27.422172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.422197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.433215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f4b08 00:25:29.059 [2024-05-15 14:03:27.435000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.435025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.445566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f4298 00:25:29.059 [2024-05-15 14:03:27.447190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.447214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.458318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f3a28 00:25:29.059 [2024-05-15 14:03:27.459931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.459958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.470791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f31b8 00:25:29.059 [2024-05-15 14:03:27.472378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.472405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.483423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f2948 00:25:29.059 [2024-05-15 14:03:27.485177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.485202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.496121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f20d8 00:25:29.059 [2024-05-15 14:03:27.497872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25420 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.497899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.508700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f1868 00:25:29.059 [2024-05-15 14:03:27.510259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.510286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:29.059 [2024-05-15 14:03:27.521146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406ab0) with pdu=0x2000190f0ff8 00:25:29.059 [2024-05-15 14:03:27.522889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.059 [2024-05-15 14:03:27.522922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:29.059 00:25:29.059 Latency(us) 00:25:29.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.059 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:29.059 nvme0n1 : 2.01 19803.82 77.36 0.00 0.00 6458.56 4763.86 24951.06 00:25:29.059 =================================================================================================================== 00:25:29.059 Total : 19803.82 77.36 0.00 0.00 6458.56 4763.86 24951.06 00:25:29.059 0 00:25:29.059 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:29.059 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:29.059 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:29.059 | .driver_specific 00:25:29.059 | .nvme_error 00:25:29.059 | .status_code 00:25:29.059 | .command_transient_transport_error' 00:25:29.059 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 )) 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79217 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 79217 ']' 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 79217 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79217 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:29.318 killing process with pid 79217 00:25:29.318 14:03:27 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79217' 00:25:29.318 Received shutdown signal, test time was about 2.000000 seconds 00:25:29.318 00:25:29.318 Latency(us) 00:25:29.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.318 =================================================================================================================== 00:25:29.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 79217 00:25:29.318 14:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 79217 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79279 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79279 /var/tmp/bperf.sock 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 79279 ']' 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:29.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:29.578 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:29.578 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:29.578 Zero copy mechanism will not be used. 00:25:29.578 [2024-05-15 14:03:28.054317] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:25:29.578 [2024-05-15 14:03:28.054384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79279 ] 00:25:29.837 [2024-05-15 14:03:28.184670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.837 [2024-05-15 14:03:28.290457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.406 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:30.406 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:30.406 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:30.407 14:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:30.665 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:30.665 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.665 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.665 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.665 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.665 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.922 nvme0n1 00:25:30.922 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:30.922 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.922 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.922 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.922 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:30.922 14:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:31.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.181 Zero copy mechanism will not be used. 00:25:31.181 Running I/O for 2 seconds... 
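The records that follow come from the second bdevperf job (randwrite, 128 KiB I/O, queue depth 16). In outline, digest.sh drives each of these error-path jobs with the recipe traced above; the sketch below only reassembles those traced commands for readability — the binary and script paths, the bperf.sock socket, the 10.0.0.2:4420 address and the cnode1 NQN are copied from the trace, while the shell backgrounding stands in for the script's waitforlisten/killprocess handling:

  # Start bdevperf on its own RPC socket; -z makes it wait for a perform_tests RPC.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Keep per-status-code NVMe error counters and retry failed I/O indefinitely at the bdev layer.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled, then arm crc32c 'corrupt' error
  # injection so digest verification fails during the run (issued through the harness's
  # rpc_cmd helper rather than the bperf socket, as in the trace).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the 2-second workload, then read back the transient transport error count that
  # the (( ... > 0 )) assertion at host/digest.sh:71 checks.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'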
00:25:31.181 [2024-05-15 14:03:29.526552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.181 [2024-05-15 14:03:29.526946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.181 [2024-05-15 14:03:29.526974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.181 [2024-05-15 14:03:29.530585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.181 [2024-05-15 14:03:29.530664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.181 [2024-05-15 14:03:29.530687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.181 [2024-05-15 14:03:29.534429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.181 [2024-05-15 14:03:29.534487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.181 [2024-05-15 14:03:29.534508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.181 [2024-05-15 14:03:29.538260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.181 [2024-05-15 14:03:29.538323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.181 [2024-05-15 14:03:29.538345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.181 [2024-05-15 14:03:29.542287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.181 [2024-05-15 14:03:29.542352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.181 [2024-05-15 14:03:29.542379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.181 [2024-05-15 14:03:29.546358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.546414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.546448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.550265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.550410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.550435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.554290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.554365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.554386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.558108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.558172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.558192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.561465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.561830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.561859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.565327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.565412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.565437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.569618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.569697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.569720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.573639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.573698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.573718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.577666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.577735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.577770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.581539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.581594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.581614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.585505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.585677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.585705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.589237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.589409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.589443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.592661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.592932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.592952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.596286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.596342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.596362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.600498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.600563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.600589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.604370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.604430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.604451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.608120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.608198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.608220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.611876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.611993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.612013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.615994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.616070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.616092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.620049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.620152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.620172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.624256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.624337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.624359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.627760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.628057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.628077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.631500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.631585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 
14:03:29.631611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.635417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.635492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.635513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.639273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.639347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.639368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.643271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.643337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.643363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.647562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.647636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.647656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.651431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.651546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.182 [2024-05-15 14:03:29.651565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.182 [2024-05-15 14:03:29.655263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.182 [2024-05-15 14:03:29.655334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.655355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.658965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.659050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:31.183 [2024-05-15 14:03:29.659069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.663208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.663277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.663304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.666976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.667360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.667392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.670836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.670936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.670958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.674853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.674910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.674931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.678961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.679027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.679048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.682648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.682707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.682727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.686400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.686486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.686505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.690499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.690561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.690582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.694628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.694701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.694722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.698464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.698551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.698571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.702429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.702509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.702534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.705885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.705942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.705964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.709872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.709950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.709971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.713593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.713645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.713664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.717555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.717615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.717635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.721289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.721355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.721375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.725279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.725371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.725390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.729018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.729175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.729194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.732756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.732861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.732881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.183 [2024-05-15 14:03:29.736166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.183 [2024-05-15 14:03:29.736472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.183 [2024-05-15 14:03:29.736504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.740054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.740127] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.740150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.744373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.744434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.744456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.748363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.748422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.748443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.752030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.752156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.752180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.756081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.756136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.756157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.759996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.760053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.760073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.764191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.764315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.764341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.768253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.768393] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.768413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.772482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.772641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.772661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.775961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.776215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.776234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.779539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.779608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.779628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.783593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.783653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.783673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.787373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.787432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.787452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.443 [2024-05-15 14:03:29.791154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.443 [2024-05-15 14:03:29.791210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.443 [2024-05-15 14:03:29.791230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.794885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 
00:25:31.444 [2024-05-15 14:03:29.794961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.794980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.798849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.798922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.798946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.803136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.803242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.803263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.806968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.807032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.807052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.810680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.811029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.811050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.814277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.814353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.814373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.818192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.818253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.818274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.821911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.821970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.821989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.825599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.825659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.825679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.829536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.829608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.829630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.833492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.833569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.833589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.837231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.837384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.837403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.840622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.840867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.840888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.844530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.844616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.844641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.848883] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.848957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.848984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.853061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.853118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.853140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.857031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.857104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.857126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.860917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.860981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.861001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.864883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.864962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.864985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.868970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.869027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.869048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.872645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.872789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.872809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:25:31.444 [2024-05-15 14:03:29.876445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.876695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.876722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.880248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.880303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.880323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.884038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.884096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.884116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.887781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.887833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.887853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.891556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.891619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.891640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.895528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.895622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.444 [2024-05-15 14:03:29.895642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.444 [2024-05-15 14:03:29.899274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.444 [2024-05-15 14:03:29.899408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.899428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.903416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.903557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.903579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.907620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.907697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.907722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.911167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.911502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.911535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.914876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.914964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.914983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.918628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.918688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.918709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.922853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.922917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.922939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.927010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.927071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.927092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.930887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.930971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.930991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.934649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.934707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.934726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.938572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.938643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.938668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.942585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.942686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.942706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.946376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.946596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.946616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.950486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.950793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.950826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.954499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.954558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.954579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.958554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.958619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.958640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.962325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.962388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.962409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.966079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.966139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.966158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.970209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.970324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.970346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.974616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.974681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.974705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.978503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.978613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.978634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.982266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.982416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 
[2024-05-15 14:03:29.982436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.985752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.986054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.986079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.989731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.989798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.989819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.993597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.993649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.993668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.445 [2024-05-15 14:03:29.997660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.445 [2024-05-15 14:03:29.997717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.445 [2024-05-15 14:03:29.997752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.714 [2024-05-15 14:03:30.002065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.714 [2024-05-15 14:03:30.002148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.714 [2024-05-15 14:03:30.002172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.714 [2024-05-15 14:03:30.006008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.714 [2024-05-15 14:03:30.006105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.714 [2024-05-15 14:03:30.006126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.714 [2024-05-15 14:03:30.009930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.714 [2024-05-15 14:03:30.010006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:31.714 [2024-05-15 14:03:30.010027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.714 [2024-05-15 14:03:30.013708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.714 [2024-05-15 14:03:30.013806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.714 [2024-05-15 14:03:30.013826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.714 [2024-05-15 14:03:30.017512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.714 [2024-05-15 14:03:30.017568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.017588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.020910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.021246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.021270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.024607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.024686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.024706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.028306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.028357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.028376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.032020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.032086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.032105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.036011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.036074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.036094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.039785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.039837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.039856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.043507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.043566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.043585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.047340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.047476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.047495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.050765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.051035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.051059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.054425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.054482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.054503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.058180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.058241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.058261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.062021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.062075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.062094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.065894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.065945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.065966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.069841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.069902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.069921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.073842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.073911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.073930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.077697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.077768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.077789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.081506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.081599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.081621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.085145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.085326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.085367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.088765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.088821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.088841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.092519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.092571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.092591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.096297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.096351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.096370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.100068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.100123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.100143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.104062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.104117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.104137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.107861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.107915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.107936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.715 [2024-05-15 14:03:30.111726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.715 [2024-05-15 14:03:30.111822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.715 [2024-05-15 14:03:30.111842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.115559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 
[2024-05-15 14:03:30.115722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.115755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.118989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.119257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.119276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.122611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.122665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.122684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.126398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.126459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.126480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.130159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.130214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.130233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.133999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.134077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.134097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.137809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.137857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.137877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.141611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with 
pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.141668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.141687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.145421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.145511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.145530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.149282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.149351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.149371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.152652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.153008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.153028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.156317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.156395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.156415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.160239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.160299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.160320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.164077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.164136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.164156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.167951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.168011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.168031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.171701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.171774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.171793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.175478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.175556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.175576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.179283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.179429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.179448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.183033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.183156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.183175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.186443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.186695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.186714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.190031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.190083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.190102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.193820] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.193879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.193898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.197616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.197669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.197687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.201363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.201441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.201461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.205139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.205191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.205210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.208891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.208960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.208980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.212640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.212708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.716 [2024-05-15 14:03:30.212728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.716 [2024-05-15 14:03:30.216458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:31.716 [2024-05-15 14:03:30.216521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.717 [2024-05-15 14:03:30.216540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:31.717 .. 00:25:32.241 [2024-05-15 14:03:30.219820 .. 14:03:30.703228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90, repeated for each remaining queued WRITE (qid:1, cid 0/1/15, len:32, lba varies); every affected command is printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and completes via nvme_qpair.c: 474:spdk_nvme_print_completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd cycling 0001/0021/0041/0061 p:0 m:0 dnr:0
00:25:32.241 [2024-05-15 14:03:30.706887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.241 [2024-05-15 14:03:30.706994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.241 [2024-05-15 14:03:30.707013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.241 [2024-05-15 14:03:30.710607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.241 [2024-05-15 14:03:30.710664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.241 [2024-05-15 14:03:30.710684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.241 [2024-05-15 14:03:30.714355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.241 [2024-05-15 14:03:30.714426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.241 [2024-05-15 14:03:30.714446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.241 [2024-05-15 14:03:30.718169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.241 [2024-05-15 14:03:30.718220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.241 [2024-05-15 14:03:30.718239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.241 [2024-05-15 14:03:30.721466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.241 [2024-05-15 14:03:30.721808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.241 [2024-05-15 14:03:30.721832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.241 [2024-05-15 14:03:30.725058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.725131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.725150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.728756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.728803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.728822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.732472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.732530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.732549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.736308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.736362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.736381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.740128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.740186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.740204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.743920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.743972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.743991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.747684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.747784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.747802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.751493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.751615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.751636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.755076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.755329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.755349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.758757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.758821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.758841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.762587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.762640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.762659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.766479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.766540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.766561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.770231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.770285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.770304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.773977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.774034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.774053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.777668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.777755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.777775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.781401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.781537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.781557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.785123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 
14:03:30.785192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.785213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.788907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.788967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.788986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.792267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.792572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.792591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.242 [2024-05-15 14:03:30.795887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.242 [2024-05-15 14:03:30.795961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.242 [2024-05-15 14:03:30.795981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.503 [2024-05-15 14:03:30.799576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.503 [2024-05-15 14:03:30.799634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.503 [2024-05-15 14:03:30.799653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.503 [2024-05-15 14:03:30.803318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.503 [2024-05-15 14:03:30.803369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.503 [2024-05-15 14:03:30.803389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.503 [2024-05-15 14:03:30.807019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.503 [2024-05-15 14:03:30.807081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.503 [2024-05-15 14:03:30.807100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.503 [2024-05-15 14:03:30.810656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 
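Note: the records above and below all follow the same pattern: tcp.c's data_crc32_calc_done() reports a data digest error on the qpair, and the WRITE in flight is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 / status code 0x22, which lets the initiator retry the command. The NVMe/TCP data digest (DDGST) is a CRC-32C computed over the data PDU payload; the short C sketch below only illustrates that check under simplified assumptions. It is not SPDK code: crc32c_sw() and ddgst_verify() are hypothetical names, and real transports use table-driven or hardware-accelerated CRC32C rather than this bitwise loop.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: software CRC-32C (Castagnoli, reflected polynomial
 * 0x82F63B78), the checksum used for the NVMe/TCP data digest (DDGST).
 * Hypothetical helper names; not taken from SPDK. */
static uint32_t crc32c_sw(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= p[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Recompute the digest over the received payload and compare it with the DDGST
 * carried at the end of the data PDU. A mismatch is what the log reports as
 * "Data digest error"; the command is then failed with a transient transport
 * error (00/22) so the host side may retry it. */
static int ddgst_verify(const void *payload, size_t len, uint32_t ddgst_received)
{
    return crc32c_sw(payload, len) == ddgst_received ? 0 : -1;
}

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof(block));

    uint32_t good = crc32c_sw(block, sizeof(block));
    uint32_t bad  = good ^ 0x1u;  /* simulate a digest that no longer matches the payload */

    printf("intact digest:    %s\n", ddgst_verify(block, sizeof(block), good) == 0 ? "ok" : "digest error");
    printf("corrupted digest: %s\n", ddgst_verify(block, sizeof(block), bad) == 0 ? "ok" : "digest error");
    return 0;
}

Compiled with a plain "cc -o ddgst ddgst.c", the second line prints "digest error", mirroring the failure path the test is deliberately exercising here.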
00:25:32.503 [2024-05-15 14:03:30.810745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.503 [2024-05-15 14:03:30.810765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.503 [2024-05-15 14:03:30.814419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.503 [2024-05-15 14:03:30.814505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.503 [2024-05-15 14:03:30.814525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.503 [2024-05-15 14:03:30.818175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.503 [2024-05-15 14:03:30.818312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.818330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.821521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.821766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.821786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.825105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.825162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.825181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.828839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.828890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.828908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.832525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.832578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.832597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.836210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.836278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.836298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.839927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.839978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.839997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.843631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.843708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.843727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.847360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.847433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.847453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.851071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.851135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.851154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.854400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.854705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.854723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.857982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.858053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.858072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.861622] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.861676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.861694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.865349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.865409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.865428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.868976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.869032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.869051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.872669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.872721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.872751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.876406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.876512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.876531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.880134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.880269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.880288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.883528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.883766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.883786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.887152] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.887207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.887227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.890864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.890914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.890933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.894536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.894589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.894608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.898184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.898280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.898299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.901891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.901943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.901962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.905579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.905648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.905667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.909253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.909339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.909359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.504 
[2024-05-15 14:03:30.912986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.913041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.913060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.916330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.504 [2024-05-15 14:03:30.916681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.504 [2024-05-15 14:03:30.916705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.504 [2024-05-15 14:03:30.920016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.920068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.920087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.923697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.923767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.923787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.927437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.927487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.927506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.931102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.931157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.931176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.934794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.934934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.934952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.938500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.938620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.938638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.942247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.942325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.942345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.945982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.946043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.946062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.949304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.949644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.949668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.952932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.953005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.953024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.956688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.956750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.956770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.960391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.960446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.960466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.964064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.964118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.964138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.967804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.967861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.967880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.971511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.971619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.971638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.975224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.975357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.975376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.978607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.978846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.978866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.982188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.982243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.982262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.985942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.985993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.986013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.989692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.989759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.989779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.993429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.993487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.993506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:30.997184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:30.997240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:30.997259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:31.000910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:31.000976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:31.000995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:31.004710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:31.004791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:31.004810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:31.008521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:31.008575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:31.008594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:31.011999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:31.012335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:31.012354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:31.015614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:31.015688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.505 [2024-05-15 14:03:31.015707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.505 [2024-05-15 14:03:31.019318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.505 [2024-05-15 14:03:31.019372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.019391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.023018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.023077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.023096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.026661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.026717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.026749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.030401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.030455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.030474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.034179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.034229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.034249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.037909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.037985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 
14:03:31.038004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.041625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.041686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.041705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.044969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.045271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.045290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.048570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.048647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.048665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.052262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.052315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.052333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.056002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.056064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.056083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.506 [2024-05-15 14:03:31.059772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.506 [2024-05-15 14:03:31.059842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.506 [2024-05-15 14:03:31.059860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.063552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.063638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:32.767 [2024-05-15 14:03:31.063657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.067365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.067442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.067461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.071096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.071236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.071254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.074477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.074746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.074765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.078095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.078149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.078167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.081837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.081896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.081915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.085576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.085628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.085647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.089272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.089348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.089367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.093027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.093081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.093100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.096854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.096904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.096923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.100608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.100674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.100694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.104389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.104446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.104465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.107806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.108152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.108170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.111414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.111485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.111505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.115144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.115198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.115218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.767 [2024-05-15 14:03:31.118885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.767 [2024-05-15 14:03:31.118959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.767 [2024-05-15 14:03:31.118978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.122561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.122619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.122638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.126320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.126373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.126392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.130116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.130203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.130222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.133864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.134004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.134023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.137248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.137491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.137509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.140813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.140876] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.140894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.144470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.144521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.144540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.148244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.148305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.148325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.151968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.152022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.152040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.155769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.155822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.155841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.159518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.159587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.159605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.163277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.163354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.163373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.167141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.167270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.167289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.170537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.170811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.170830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.174122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.174173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.174192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.177887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.177938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.177957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.181630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.181679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.181698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.185406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.185509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.185527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.189157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.189225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.189244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.192880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 
14:03:31.192942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.192960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.196636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.196711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.196730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.200480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.200641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.200661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.204418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.204579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.204598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.208337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.208478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.208498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.211784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.212050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.212069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.215375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.215431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.215449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.219117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with 
pdu=0x2000190fef90 00:25:32.768 [2024-05-15 14:03:31.219170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.768 [2024-05-15 14:03:31.219189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.768 [2024-05-15 14:03:31.222896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.222948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.222967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.226670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.226720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.226751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.230436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.230491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.230510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.234189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.234257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.234276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.237949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.238029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.238048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.241689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.241749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.241769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.245020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.245361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.245384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.248761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.248831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.248850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.252481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.252536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.252555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.256275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.256326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.256345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.260055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.260108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.260127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.263849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.263901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.263920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.267630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.267692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.267711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.271422] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.271500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.271519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.275214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.275273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.275292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.278608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.278964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.278988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.282300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.282371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.282389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.286094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.286148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.286167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.289862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.289918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.289937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.293614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.293672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.293691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.769 
[2024-05-15 14:03:31.297396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.297475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.297494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.301236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.301304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.301335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.304943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.305077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.305097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.308384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.308647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.308666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.311917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.311968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.311986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.315695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.315757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.769 [2024-05-15 14:03:31.315776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.769 [2024-05-15 14:03:31.319583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.769 [2024-05-15 14:03:31.319643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.770 [2024-05-15 14:03:31.319662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:32.770 [2024-05-15 14:03:31.323446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:32.770 [2024-05-15 14:03:31.323500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.770 [2024-05-15 14:03:31.323520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.327129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.327183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.327202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.330835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.330892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.330911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.334538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.334594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.334613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.338337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.338412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.338430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.342084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.342146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.342165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.345466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.345794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.345813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.349106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.349182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.349201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.352842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.352896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.352915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.356602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.356657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.356676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.360388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.360456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.360475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.364158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.364217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.364236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.367962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.368017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.368036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.371723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.371845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.371864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.375153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.375383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.375401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.030 [2024-05-15 14:03:31.378718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.030 [2024-05-15 14:03:31.378805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.030 [2024-05-15 14:03:31.378824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.382475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.382526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.382545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.386241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.386295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.386314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.390036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.390091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.390111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.393796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.393877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.393896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.397500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.397620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.397639] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.401221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.401322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.401342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.404963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.405036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.405055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.408347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.408701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.408727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.412066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.412117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.412137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.415788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.415838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.415858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.419549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.419620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.419638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.423316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.423393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.423411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.427063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.427114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.427133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.430784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.430851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.430870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.434554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.434623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.434642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.438354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.438409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.438427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.441743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.442076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.442100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.445290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.445376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.445395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.449083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.449132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 
14:03:31.449152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.452869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.452920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.452939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.456614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.456675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.456694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.460398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.460451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.460470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.464240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.464321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.464339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.031 [2024-05-15 14:03:31.467959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.031 [2024-05-15 14:03:31.468090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.031 [2024-05-15 14:03:31.468109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.471354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.471616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.471634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.475193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.475558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.032 [2024-05-15 14:03:31.475583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.478822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.478902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.478921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.482631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.482683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.482702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.486422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.486479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.486498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.490170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.490224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.490243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.493933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.493990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.494009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.497716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.497794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.497813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.501527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.501584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.501603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.505342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.505438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.505457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.032 [2024-05-15 14:03:31.509131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2406df0) with pdu=0x2000190fef90 00:25:33.032 [2024-05-15 14:03:31.509198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.032 [2024-05-15 14:03:31.509218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.032
00:25:33.032 Latency(us)
00:25:33.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:33.032 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:33.032 nvme0n1 : 2.00 8207.64 1025.95 0.00 0.00 1945.67 1335.72 11264.82
00:25:33.032 ===================================================================================================================
00:25:33.032 Total : 8207.64 1025.95 0.00 0.00 1945.67 1335.72 11264.82
00:25:33.032 0
00:25:33.032 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:33.032 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:33.032 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:33.032 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:33.032 | .driver_specific 00:25:33.032 | .nvme_error 00:25:33.032 | .status_code 00:25:33.032 | .command_transient_transport_error' 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 529 > 0 )) 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79279 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 79279 ']' 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 79279 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79279 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:33.290 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:33.290 killing process with pid 79279 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error
-- common/autotest_common.sh@964 -- # echo 'killing process with pid 79279' 00:25:33.290 Received shutdown signal, test time was about 2.000000 seconds 00:25:33.290 00:25:33.290 Latency(us) 00:25:33.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.290 =================================================================================================================== 00:25:33.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.291 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 79279 00:25:33.291 14:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 79279 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79074 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 79074 ']' 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 79074 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79074 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:33.547 killing process with pid 79074 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79074' 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 79074 00:25:33.547 [2024-05-15 14:03:32.025463] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:33.547 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 79074 00:25:33.805 00:25:33.805 real 0m17.268s 00:25:33.805 user 0m31.311s 00:25:33.805 sys 0m5.673s 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.805 ************************************ 00:25:33.805 END TEST nvmf_digest_error 00:25:33.805 ************************************ 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:33.805 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:33.805 rmmod nvme_tcp 00:25:33.805 rmmod nvme_fabrics 00:25:34.062 rmmod nvme_keyring 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79074 ']' 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79074 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 79074 ']' 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 79074 00:25:34.062 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (79074) - No such process 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 79074 is not found' 00:25:34.062 Process with pid 79074 is not found 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:34.062 ************************************ 00:25:34.062 END TEST nvmf_digest 00:25:34.062 ************************************ 00:25:34.062 00:25:34.062 real 0m35.662s 00:25:34.062 user 1m3.625s 00:25:34.062 sys 0m11.483s 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:34.062 14:03:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:34.062 14:03:32 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:25:34.062 14:03:32 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:25:34.062 14:03:32 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:34.062 14:03:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:34.062 14:03:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:34.062 14:03:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:34.062 ************************************ 00:25:34.062 START TEST nvmf_host_multipath 00:25:34.062 ************************************ 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:34.062 * Looking for test storage... 
00:25:34.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.062 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:34.063 14:03:32 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:34.063 Cannot find device "nvmf_tgt_br" 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:34.063 Cannot find device "nvmf_tgt_br2" 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:34.063 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:25:34.321 Cannot find device "nvmf_tgt_br" 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:34.321 Cannot find device "nvmf_tgt_br2" 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:34.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:34.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:34.321 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:34.322 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:34.322 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:34.322 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:34.322 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:34.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:25:34.580 00:25:34.580 --- 10.0.0.2 ping statistics --- 00:25:34.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.580 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:34.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:34.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:25:34.580 00:25:34.580 --- 10.0.0.3 ping statistics --- 00:25:34.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.580 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:34.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:25:34.580 00:25:34.580 --- 10.0.0.1 ping statistics --- 00:25:34.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.580 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:34.580 14:03:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=79539 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 79539 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@827 -- # '[' -z 79539 ']' 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:34.580 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:34.580 [2024-05-15 14:03:33.058803] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:25:34.580 [2024-05-15 14:03:33.058901] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.837 [2024-05-15 14:03:33.201766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:34.837 [2024-05-15 14:03:33.287743] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.837 [2024-05-15 14:03:33.287793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.837 [2024-05-15 14:03:33.287803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.837 [2024-05-15 14:03:33.287811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.837 [2024-05-15 14:03:33.287818] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
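For anyone reproducing this environment outside the harness: the nvmf_veth_init steps logged above amount to roughly the following iproute2/iptables sequence. This is a condensed sketch that reuses the namespace, interface, and address names shown in the log; it omits the cleanup and error handling that nvmf/common.sh performs and assumes it is run as root on a host without conflicting interfaces.

    # target-side network namespace and veth pairs (names as in the log)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator gets 10.0.0.1, the two target-side interfaces get 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side veth ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic on the default port and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # reachability check, mirroring the ping output captured above
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3

With that topology in place, the target application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...), which is why the listeners created later on 10.0.0.2:4420 and 10.0.0.2:4421 are reachable from the initiator at 10.0.0.1.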
00:25:34.837 [2024-05-15 14:03:33.289320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.837 [2024-05-15 14:03:33.289367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.403 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:35.403 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:25:35.403 14:03:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.403 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.403 14:03:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:35.661 14:03:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.661 14:03:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79539 00:25:35.661 14:03:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:35.661 [2024-05-15 14:03:34.170433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.661 14:03:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:35.919 Malloc0 00:25:35.919 14:03:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:36.177 14:03:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.435 14:03:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.435 [2024-05-15 14:03:34.958029] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:36.435 [2024-05-15 14:03:34.958259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.435 14:03:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:36.693 [2024-05-15 14:03:35.154022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:36.693 14:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=79592 00:25:36.693 14:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 79592 /var/tmp/bdevperf.sock 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 79592 ']' 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:25:36.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:36.694 14:03:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:37.646 14:03:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:37.646 14:03:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:25:37.646 14:03:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:37.914 14:03:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:38.172 Nvme0n1 00:25:38.172 14:03:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:38.430 Nvme0n1 00:25:38.430 14:03:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:38.430 14:03:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:39.366 14:03:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:39.366 14:03:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.625 14:03:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:39.884 14:03:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:39.884 14:03:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:39.884 14:03:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79636 00:25:39.884 14:03:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:46.449 Attaching 4 probes... 
00:25:46.449 @path[10.0.0.2, 4421]: 22429 00:25:46.449 @path[10.0.0.2, 4421]: 23169 00:25:46.449 @path[10.0.0.2, 4421]: 23032 00:25:46.449 @path[10.0.0.2, 4421]: 23041 00:25:46.449 @path[10.0.0.2, 4421]: 23114 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79636 00:25:46.449 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:46.450 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:46.450 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.450 14:03:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:46.708 14:03:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:46.708 14:03:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79744 00:25:46.708 14:03:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:46.708 14:03:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.358 Attaching 4 probes... 
00:25:53.358 @path[10.0.0.2, 4420]: 23247 00:25:53.358 @path[10.0.0.2, 4420]: 23683 00:25:53.358 @path[10.0.0.2, 4420]: 22763 00:25:53.358 @path[10.0.0.2, 4420]: 21597 00:25:53.358 @path[10.0.0.2, 4420]: 21480 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79744 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79862 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:53.358 14:03:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:59.924 14:03:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:59.924 14:03:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:59.924 14:03:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:59.924 14:03:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:59.924 Attaching 4 probes... 
00:25:59.924 @path[10.0.0.2, 4421]: 17552 00:25:59.924 @path[10.0.0.2, 4421]: 23037 00:25:59.924 @path[10.0.0.2, 4421]: 22440 00:25:59.924 @path[10.0.0.2, 4421]: 22364 00:25:59.924 @path[10.0.0.2, 4421]: 22132 00:25:59.924 14:03:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:59.924 14:03:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:59.924 14:03:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79862 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79974 00:25:59.924 14:03:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:06.488 Attaching 4 probes... 
00:26:06.488 00:26:06.488 00:26:06.488 00:26:06.488 00:26:06.488 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79974 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.488 14:04:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.747 14:04:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:26:06.747 14:04:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80092 00:26:06.747 14:04:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:06.747 14:04:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:13.351 Attaching 4 probes... 
00:26:13.351 @path[10.0.0.2, 4421]: 17751 00:26:13.351 @path[10.0.0.2, 4421]: 18152 00:26:13.351 @path[10.0.0.2, 4421]: 18167 00:26:13.351 @path[10.0.0.2, 4421]: 18541 00:26:13.351 @path[10.0.0.2, 4421]: 18014 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80092 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:13.351 14:04:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:14.288 14:04:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:14.288 14:04:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80210 00:26:14.288 14:04:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:14.288 14:04:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:20.948 Attaching 4 probes... 
00:26:20.948 @path[10.0.0.2, 4420]: 17876 00:26:20.948 @path[10.0.0.2, 4420]: 18212 00:26:20.948 @path[10.0.0.2, 4420]: 16743 00:26:20.948 @path[10.0.0.2, 4420]: 14916 00:26:20.948 @path[10.0.0.2, 4420]: 15254 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80210 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:20.948 14:04:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:20.948 [2024-05-15 14:04:19.013695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:20.948 14:04:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:20.948 14:04:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:27.518 14:04:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:27.518 14:04:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80384 00:26:27.518 14:04:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:27.518 14:04:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:32.844 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:32.844 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:33.102 Attaching 4 probes... 
00:26:33.102 @path[10.0.0.2, 4421]: 18277 00:26:33.102 @path[10.0.0.2, 4421]: 18310 00:26:33.102 @path[10.0.0.2, 4421]: 17431 00:26:33.102 @path[10.0.0.2, 4421]: 17942 00:26:33.102 @path[10.0.0.2, 4421]: 20890 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80384 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 79592 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 79592 ']' 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 79592 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79592 00:26:33.102 killing process with pid 79592 00:26:33.102 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:33.103 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:33.103 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79592' 00:26:33.103 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 79592 00:26:33.103 14:04:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 79592 00:26:33.103 Connection closed with partial response: 00:26:33.103 00:26:33.103 00:26:33.370 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 79592 00:26:33.370 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:33.370 [2024-05-15 14:03:35.224710] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:26:33.370 [2024-05-15 14:03:35.224898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79592 ] 00:26:33.370 [2024-05-15 14:03:35.365129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.370 [2024-05-15 14:03:35.465771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.370 Running I/O for 90 seconds... 
00:26:33.370 [2024-05-15 14:03:44.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:44.999823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:44.999870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:44.999885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:44.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:44.999916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:44.999934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:44.999946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:44.999964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:44.999977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:44.999994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:33.370 [2024-05-15 14:03:45.000461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.370 [2024-05-15 14:03:45.000473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.000503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.000532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.000567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 
14:03:45.000758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.000979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.000997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88760 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001400] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.371 [2024-05-15 14:03:45.001537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 
14:03:45.001708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.371 [2024-05-15 14:03:45.001751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.371 [2024-05-15 14:03:45.001764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.001794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.001825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.001857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.001888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.001918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.001949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.001980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.001998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.372 [2024-05-15 14:03:45.002653] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.002947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:33.372 [2024-05-15 14:03:45.002978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.002996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.003010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:33.372 [2024-05-15 14:03:45.003028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.372 [2024-05-15 14:03:45.003041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.003072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.003103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.003133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.003164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.003195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.003227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.003257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.003293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.003324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.003342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.003355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:45.004576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.004966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.004980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.005007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.005020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.005038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.005051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.005069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.005081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.005100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.005113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.005132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.005145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.005163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.005176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:45.005201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:45.005214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:26:33.373 [2024-05-15 14:03:51.514830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.514889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.514936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.514951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.514971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.514986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.515019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.515051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.515105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.515138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.373 [2024-05-15 14:03:51.515171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:51.515204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:51.515236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:51.515269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:51.515302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:33.373 [2024-05-15 14:03:51.515321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.373 [2024-05-15 14:03:51.515335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.515709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.374 [2024-05-15 14:03:51.515829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.374 [2024-05-15 14:03:51.515864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.374 [2024-05-15 14:03:51.515897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.374 [2024-05-15 14:03:51.515930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.374 [2024-05-15 14:03:51.515963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.515983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:33.374 [2024-05-15 14:03:51.516003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.374 [2024-05-15 14:03:51.516037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.374 [2024-05-15 14:03:51.516071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.374 [2024-05-15 14:03:51.516666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:33.374 [2024-05-15 14:03:51.516685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.516699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.516732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.516779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.516813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.516846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.516879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.516912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.516945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.516969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.516983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:26:33.375 [2024-05-15 14:03:51.517149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.375 [2024-05-15 14:03:51.517605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.517979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.517992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.518012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.518025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.518044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.518058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.518077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.518091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.375 [2024-05-15 14:03:51.518110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.375 [2024-05-15 14:03:51.518125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:33.376 [2024-05-15 14:03:51.518191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.518441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.518805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.518819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.519519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.519562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.519603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.519643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-05-15 14:03:51.519685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.519725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.519778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.519818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.519865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.519904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.519929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.519943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:26:33.376 [2024-05-15 14:03:51.519969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.519982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.520018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.520032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.520058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.520071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.520097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.520111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.520136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.520150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.520175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.520189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.520215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.520235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:51.520261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:51.520275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:33.376 [2024-05-15 14:03:58.409434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.376 [2024-05-15 14:03:58.409499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.409958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.409982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.409994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-05-15 14:03:58.410457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:33.377 [2024-05-15 14:03:58.410795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.377 [2024-05-15 14:03:58.410807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.410825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.410837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.410855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.410868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.410885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.410916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.410928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.410947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.410959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.410976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.410989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:33.378 [2024-05-15 14:03:58.411133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.378 [2024-05-15 14:03:58.411782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.411980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.411998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.412010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.412028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-05-15 14:03:58.412041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:33.378 [2024-05-15 14:03:58.412061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:33.378 [2024-05-15 14:03:58.412074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.379 [2024-05-15 14:03:58.412534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.412971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 
m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.412989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.413002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.413020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.413036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.413054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.413068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.413085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.413098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.413116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.379 [2024-05-15 14:03:58.413128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.379 [2024-05-15 14:03:58.413781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:03:58.413805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.413833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:03:58.413846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.413870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:03:58.413883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.413907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:03:58.413920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.413944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:03:58.413957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.413982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.413994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.414018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.414030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.414054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.414066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.414090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.414102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.414139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.414152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.414176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.414189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.414213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.414225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:03:58.414258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:03:58.414272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.544661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.544754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.544800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.544845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.544888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.544936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.544969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.544989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 
14:04:11.545182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.380 [2024-05-15 14:04:11.545537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.380 [2024-05-15 14:04:11.545620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.380 [2024-05-15 14:04:11.545634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.545974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.545987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.546016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 
14:04:11.546465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.381 [2024-05-15 14:04:11.546878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.546905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.546934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.546962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.546977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.381 [2024-05-15 14:04:11.546990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.381 [2024-05-15 14:04:11.547005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67920 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.547563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 
[2024-05-15 14:04:11.547703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.547977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.547991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.548005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.548020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.382 [2024-05-15 14:04:11.548033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.548048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.548061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.548075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.548088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.548103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.548116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.548131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.548144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.548159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.548172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.382 [2024-05-15 14:04:11.548190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.382 [2024-05-15 14:04:11.548206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.383 [2024-05-15 14:04:11.548234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.383 [2024-05-15 14:04:11.548261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.383 [2024-05-15 14:04:11.548289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.383 [2024-05-15 14:04:11.548317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.383 [2024-05-15 14:04:11.548346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.383 [2024-05-15 14:04:11.548373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c0850 is same with the state(5) to be set 00:26:33.383 [2024-05-15 14:04:11.548404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68064 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68072 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68080 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68088 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68544 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68552 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68560 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68568 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68576 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68584 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68592 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.548937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.383 [2024-05-15 14:04:11.548947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.383 [2024-05-15 14:04:11.548956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68600 len:8 PRP1 0x0 PRP2 0x0 00:26:33.383 [2024-05-15 14:04:11.548968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.549016] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c0850 was disconnected and freed. reset controller. 00:26:33.383 [2024-05-15 14:04:11.550099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.383 [2024-05-15 14:04:11.550186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.383 [2024-05-15 14:04:11.550212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.383 [2024-05-15 14:04:11.550249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6970 (9): Bad file descriptor 00:26:33.383 [2024-05-15 14:04:11.550610] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-05-15 14:04:11.550687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-05-15 14:04:11.550730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-05-15 14:04:11.550765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c6970 with addr=10.0.0.2, port=4421 00:26:33.384 [2024-05-15 14:04:11.550784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c6970 is same with the state(5) to be set 00:26:33.384 [2024-05-15 14:04:11.550819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6970 (9): Bad file descriptor 00:26:33.384 [2024-05-15 14:04:11.550850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.384 [2024-05-15 14:04:11.550869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.384 [2024-05-15 14:04:11.550888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.384 [2024-05-15 14:04:11.550922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.384 [2024-05-15 14:04:11.550938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.384 [2024-05-15 14:04:21.586319] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:33.384 Received shutdown signal, test time was about 54.668714 seconds 00:26:33.384 00:26:33.384 Latency(us) 00:26:33.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.384 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:33.384 Verification LBA range: start 0x0 length 0x4000 00:26:33.384 Nvme0n1 : 54.67 8503.33 33.22 0.00 0.00 15036.96 145.58 7061253.96 00:26:33.384 =================================================================================================================== 00:26:33.384 Total : 8503.33 33.22 0.00 0.00 15036.96 145.58 7061253.96 00:26:33.384 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.642 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:26:33.642 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:33.642 14:04:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:26:33.642 14:04:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:33.642 14:04:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:33.642 rmmod nvme_tcp 00:26:33.642 rmmod nvme_fabrics 00:26:33.642 rmmod nvme_keyring 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 79539 ']' 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 79539 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 79539 ']' 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 79539 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79539 00:26:33.642 killing process with pid 79539 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79539' 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # 
kill 79539 00:26:33.642 [2024-05-15 14:04:32.116667] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:33.642 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 79539 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:33.901 00:26:33.901 real 0m59.944s 00:26:33.901 user 2m41.385s 00:26:33.901 sys 0m22.733s 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:33.901 14:04:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:33.901 ************************************ 00:26:33.901 END TEST nvmf_host_multipath 00:26:33.901 ************************************ 00:26:34.161 14:04:32 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:34.161 14:04:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:34.161 14:04:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:34.161 14:04:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.161 ************************************ 00:26:34.161 START TEST nvmf_timeout 00:26:34.161 ************************************ 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:34.161 * Looking for test storage... 
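Everything from the nvmf_delete_subsystem call down to the address flush above is the multipath test cleaning up after itself before the timeout test begins: the cnode1 subsystem is deleted over RPC, the NVMe/TCP initiator modules are unloaded, the target process (pid 79539) is killed and reaped, and the test address is flushed from nvmf_init_if. A minimal sketch of that teardown, assuming the same NQN, module and interface names as in the trace (the target pid and the namespace removal are tracked by the harness itself, so those two lines are stand-ins):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# drop the subsystem first so open connections are torn down cleanly
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# unload the initiator-side modules (this is what produces the rmmod lines above)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# stop the target, drop the test namespace, flush the initiator-side address
kill "$nvmfpid" && wait "$nvmfpid"        # $nvmfpid: stand-in for the pid the harness recorded (79539 here)
ip netns delete nvmf_tgt_ns_spdk || true  # assumption: roughly what _remove_spdk_ns amounts to in this run
ip -4 addr flush nvmf_init_if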
00:26:34.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.161 
14:04:32 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.161 14:04:32 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:34.161 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:34.162 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.162 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:34.162 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:34.162 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:34.162 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:34.162 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:34.162 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:34.422 Cannot find device "nvmf_tgt_br" 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:34.422 Cannot find device "nvmf_tgt_br2" 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:34.422 Cannot find device "nvmf_tgt_br" 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:34.422 Cannot find device "nvmf_tgt_br2" 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:34.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:34.422 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:34.422 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:34.682 14:04:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:34.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:26:34.682 00:26:34.682 --- 10.0.0.2 ping statistics --- 00:26:34.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.682 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:34.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:34.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:26:34.682 00:26:34.682 --- 10.0.0.3 ping statistics --- 00:26:34.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.682 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:34.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:26:34.682 00:26:34.682 --- 10.0.0.1 ping statistics --- 00:26:34.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.682 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=80692 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 80692 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 80692 ']' 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:34.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.682 14:04:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:34.682 [2024-05-15 14:04:33.157422] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:26:34.682 [2024-05-15 14:04:33.157496] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.953 [2024-05-15 14:04:33.302784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:34.953 [2024-05-15 14:04:33.401890] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.953 [2024-05-15 14:04:33.401940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.953 [2024-05-15 14:04:33.401951] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.953 [2024-05-15 14:04:33.401959] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.953 [2024-05-15 14:04:33.401967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.953 [2024-05-15 14:04:33.402357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.953 [2024-05-15 14:04:33.402360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.533 14:04:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:35.533 14:04:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:35.533 14:04:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:35.533 14:04:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.533 14:04:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.792 14:04:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.792 14:04:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:35.792 14:04:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:35.792 [2024-05-15 14:04:34.324900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.792 14:04:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:36.050 Malloc0 00:26:36.050 14:04:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.307 14:04:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.565 14:04:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.823 [2024-05-15 14:04:35.171930] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:36.823 [2024-05-15 14:04:35.172214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=80745 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout 
-- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 80745 /var/tmp/bdevperf.sock 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 80745 ']' 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:36.823 14:04:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.823 [2024-05-15 14:04:35.241297] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:26:36.823 [2024-05-15 14:04:35.241387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80745 ] 00:26:37.081 [2024-05-15 14:04:35.383980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.081 [2024-05-15 14:04:35.483030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.648 14:04:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:37.648 14:04:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:37.648 14:04:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:37.907 14:04:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:38.165 NVMe0n1 00:26:38.165 14:04:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=80763 00:26:38.165 14:04:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:38.165 14:04:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:38.165 Running I/O for 10 seconds... 
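The trace above assembles the whole timeout fixture: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace, a TCP transport and a Malloc0-backed subsystem (nqn.2016-06.io.spdk:cnode1) are exposed on 10.0.0.2:4420, and bdevperf attaches to it with a 5-second controller-loss timeout and a 2-second reconnect delay before the 10-second verify run starts. A minimal sketch of the same sequence, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk, the target already up on its default RPC socket, and bdevperf started with -z -r /var/tmp/bdevperf.sock as shown above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: TCP transport, 64 MiB / 512 B malloc bdev, subsystem listening on 10.0.0.2:4420
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: same bdev_nvme options as the traced run, then attach with the
# timeout knobs the test exercises and kick off the verify workload
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The very next RPC in the log, nvmf_subsystem_remove_listener on 10.0.0.2:4420, pulls the path out from under that verify job; the long run of "ABORTED - SQ DELETION" completions that follows is the queued I/O being failed back, which is exactly the window in which the ctrlr-loss-timeout/reconnect-delay pair is exercised.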
00:26:39.099 14:04:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.358 [2024-05-15 14:04:37.820979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 
[2024-05-15 14:04:37.821302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.358 [2024-05-15 14:04:37.821858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.358 [2024-05-15 14:04:37.821918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.358 [2024-05-15 14:04:37.821929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.821938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.821949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.821958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.821969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.821978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.821989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.821998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 
[2024-05-15 14:04:37.822231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.822501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93440 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.822987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.822998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 
[2024-05-15 14:04:37.823069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.359 [2024-05-15 14:04:37.823331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.359 [2024-05-15 14:04:37.823523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.359 [2024-05-15 14:04:37.823532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.360 [2024-05-15 14:04:37.823559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.360 [2024-05-15 14:04:37.823579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.360 [2024-05-15 14:04:37.823599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.360 [2024-05-15 14:04:37.823619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.360 [2024-05-15 14:04:37.823640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.360 [2024-05-15 14:04:37.823660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.360 [2024-05-15 14:04:37.823680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.360 [2024-05-15 14:04:37.823700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.360 [2024-05-15 14:04:37.823720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.360 [2024-05-15 14:04:37.823747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.360 [2024-05-15 14:04:37.823767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.360 [2024-05-15 14:04:37.823787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.360 [2024-05-15 14:04:37.823808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa46260 is same with the state(5) to be set 00:26:39.360 [2024-05-15 14:04:37.823831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:39.360 [2024-05-15 14:04:37.823838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:39.360 [2024-05-15 14:04:37.823846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94008 len:8 PRP1 0x0 PRP2 0x0 00:26:39.360 [2024-05-15 14:04:37.823856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.360 [2024-05-15 14:04:37.823905] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa46260 was disconnected and freed. reset controller. 
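Note on the block above: once the target stops servicing the connection, the queued READ/WRITE commands on qid:1 are reported back as ABORTED - SQ DELETION, qpair 0xa46260 is disconnected and freed, and bdev_nvme starts its reset/reconnect cycle. The second occurrence of this pattern later in this log is explicitly preceded by an nvmf_subsystem_remove_listener call, and the listener is re-added before the next bdevperf run, so the following is only a rough sketch, reconstructed from the rpc.py invocations visible in this trace, of how the same connection drop can be provoked and observed by hand. The paths, NQN and addresses are the ones used by this CI run and would need adjusting elsewhere.

    # Sketch only: commands copied from the rpc.py invocations traced in this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: drop the TCP listener so the host loses its connection;
    # outstanding host I/O is then aborted with "SQ DELETION" as in the dump above.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side (bdevperf RPC socket): the controller stays registered while
    # bdev_nvme keeps retrying the reconnect.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'

    # Restoring the listener lets a later reconnect or attach succeed again.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420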
00:26:39.360 [2024-05-15 14:04:37.824126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:39.360 [2024-05-15 14:04:37.824202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6c80 (9): Bad file descriptor 00:26:39.360 [2024-05-15 14:04:37.824289] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.360 [2024-05-15 14:04:37.824345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.360 [2024-05-15 14:04:37.824377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.360 [2024-05-15 14:04:37.824389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f6c80 with addr=10.0.0.2, port=4420 00:26:39.360 [2024-05-15 14:04:37.824398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6c80 is same with the state(5) to be set 00:26:39.360 [2024-05-15 14:04:37.824415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6c80 (9): Bad file descriptor 00:26:39.360 [2024-05-15 14:04:37.824430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:39.360 [2024-05-15 14:04:37.824447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:39.360 [2024-05-15 14:04:37.824462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:39.360 [2024-05-15 14:04:37.824480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.360 [2024-05-15 14:04:37.824489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:39.360 14:04:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:41.263 [2024-05-15 14:04:39.821361] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.263 [2024-05-15 14:04:39.821440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.263 [2024-05-15 14:04:39.821475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.263 [2024-05-15 14:04:39.821488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f6c80 with addr=10.0.0.2, port=4420 00:26:41.263 [2024-05-15 14:04:39.821501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6c80 is same with the state(5) to be set 00:26:41.263 [2024-05-15 14:04:39.821526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6c80 (9): Bad file descriptor 00:26:41.263 [2024-05-15 14:04:39.821550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.263 [2024-05-15 14:04:39.821560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.263 [2024-05-15 14:04:39.821571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.263 [2024-05-15 14:04:39.821596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.263 [2024-05-15 14:04:39.821606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.521 14:04:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:41.521 14:04:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:41.521 14:04:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:41.521 14:04:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:41.521 14:04:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:41.521 14:04:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:41.521 14:04:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:41.779 14:04:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:41.779 14:04:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:43.685 [2024-05-15 14:04:41.818508] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.685 [2024-05-15 14:04:41.818599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.685 [2024-05-15 14:04:41.818631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.685 [2024-05-15 14:04:41.818643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f6c80 with addr=10.0.0.2, port=4420 00:26:43.685 [2024-05-15 14:04:41.818656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6c80 is same with the state(5) to be set 00:26:43.685 [2024-05-15 14:04:41.818681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6c80 (9): Bad file descriptor 00:26:43.685 [2024-05-15 14:04:41.818698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.685 [2024-05-15 14:04:41.818708] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.685 [2024-05-15 14:04:41.818718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.685 [2024-05-15 14:04:41.818752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.685 [2024-05-15 14:04:41.818762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.615 [2024-05-15 14:04:43.815571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
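Note: while the reconnect attempts above keep failing with connect() errno = 111, the test (host/timeout.sh@57/@58 in the trace) checks that controller NVMe0 and bdev NVMe0n1 are still registered, i.e. the controller has not yet been dropped because the controller-loss window configured for this run has not run out. A minimal sketch of those two checks, with the helper bodies inferred from the xtrace lines above (the real helpers live in host/timeout.sh):

    # Inferred from the trace: get_controller/get_bdev query the bdevperf RPC
    # socket and print the registered names.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    get_controller() { $rpc bdev_nvme_get_controllers | jq -r '.[].name'; }
    get_bdev()       { $rpc bdev_get_bdevs | jq -r '.[].name'; }

    # Inside the reconnect window both names must still be present.
    [[ $(get_controller) == NVMe0 ]]
    [[ $(get_bdev) == NVMe0n1 ]]

Later in this log the very same checks return empty strings ([[ '' == '' ]]), which is the expected state once the loss timeout has expired and the controller and its bdev have been deleted.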
00:26:46.553 00:26:46.553 Latency(us) 00:26:46.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.553 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:46.553 Verification LBA range: start 0x0 length 0x4000 00:26:46.553 NVMe0n1 : 8.13 1430.33 5.59 15.75 0.00 88543.68 3105.72 7061253.96 00:26:46.553 =================================================================================================================== 00:26:46.553 Total : 1430.33 5.59 15.75 0.00 88543.68 3105.72 7061253.96 00:26:46.553 0 00:26:46.812 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:46.812 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:46.812 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:47.069 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:47.069 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:47.070 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:47.070 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 80763 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 80745 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 80745 ']' 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 80745 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80745 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:47.328 killing process with pid 80745 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80745' 00:26:47.328 Received shutdown signal, test time was about 9.050387 seconds 00:26:47.328 00:26:47.328 Latency(us) 00:26:47.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.328 =================================================================================================================== 00:26:47.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 80745 00:26:47.328 14:04:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 80745 00:26:47.585 14:04:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.585 [2024-05-15 14:04:46.133427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.843 14:04:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 
00:26:47.843 14:04:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=80885 00:26:47.843 14:04:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 80885 /var/tmp/bdevperf.sock 00:26:47.843 14:04:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 80885 ']' 00:26:47.844 14:04:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:47.844 14:04:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:47.844 14:04:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:47.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:47.844 14:04:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:47.844 14:04:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.844 [2024-05-15 14:04:46.201166] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:26:47.844 [2024-05-15 14:04:46.201282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80885 ] 00:26:47.844 [2024-05-15 14:04:46.341318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.101 [2024-05-15 14:04:46.431528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.681 14:04:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:48.681 14:04:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:48.681 14:04:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:48.939 14:04:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:49.199 NVMe0n1 00:26:49.199 14:04:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=80903 00:26:49.199 14:04:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:49.199 14:04:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:49.199 Running I/O for 10 seconds... 
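Note: the second bdevperf instance above is attached with explicit reconnect knobs (host/timeout.sh@78/@79 in the trace). A sketch of that attach step with the options copied verbatim from the trace; the per-option explanations in the comments follow the general bdev_nvme option descriptions and are not stated by this log itself:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # Retry setting used by the test, value taken verbatim from the trace.
    $rpc bdev_nvme_set_options -r -1

    # --reconnect-delay-sec 1      : wait about 1s between reconnect attempts
    # --fast-io-fail-timeout-sec 2 : after about 2s disconnected, fail I/O back to the upper layer quickly
    # --ctrlr-loss-timeout-sec 5   : after about 5s without a reconnect, delete the controller (NVMe0/NVMe0n1 disappear)
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

The aborted-I/O dump that follows is triggered by the nvmf_subsystem_remove_listener call at the start of the next block: with the listener gone, queued commands are aborted and bdev_nvme retries the connection every reconnect-delay-sec until ctrlr-loss-timeout-sec expires.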
00:26:50.134 14:04:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.396 [2024-05-15 14:04:48.815050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 
[2024-05-15 14:04:48.815293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.396 [2024-05-15 14:04:48.815828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.396 [2024-05-15 14:04:48.815935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.396 [2024-05-15 14:04:48.815945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.815953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.815963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.815972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.815982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.815990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 
14:04:48.816055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:59 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.397 [2024-05-15 14:04:48.816776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 
14:04:48.816794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.397 [2024-05-15 14:04:48.816804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.397 [2024-05-15 14:04:48.816812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.816987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.816997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.398 [2024-05-15 14:04:48.817362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-05-15 14:04:48.817496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:50.398 [2024-05-15 14:04:48.817545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:50.398 [2024-05-15 14:04:48.817554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:26:50.398 [2024-05-15 14:04:48.817564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.398 [2024-05-15 14:04:48.817614] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23cb1d0 was disconnected and freed. reset controller. 
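The dump above is the SPDK initiator draining I/O qpair 1 after the target connection went away: every queued READ/WRITE is completed manually with the generic NVMe status ABORTED - SQ DELETION (the "(00/08)" pair is status code type 00h, status code 08h), and bdev_nvme then frees the qpair and schedules a controller reset. Rather than reading the entries one by one, a small grep/awk helper can summarize them; this is only a sketch, and it assumes the console output has been saved to a file named build.log (hypothetical name).

  #!/usr/bin/env bash
  # summarize_aborts.sh - tally aborted commands in a saved SPDK console log
  log=${1:-build.log}
  # opcode (READ/WRITE) and submission queue id of every printed command
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]* sqid:[0-9]*' "$log" |
    awk '{print $3, $4}' | sort | uniq -c | sort -rn
  # total completions aborted because their submission queue was deleted (00/08)
  grep -o 'ABORTED - SQ DELETION (00/08)' "$log" | wc -l
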
00:26:50.398 [2024-05-15 14:04:48.817837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.398 [2024-05-15 14:04:48.817914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor
00:26:50.398 [2024-05-15 14:04:48.818001] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.398 [2024-05-15 14:04:48.818063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.398 [2024-05-15 14:04:48.818093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.398 [2024-05-15 14:04:48.818106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237bc80 with addr=10.0.0.2, port=4420
00:26:50.398 [2024-05-15 14:04:48.818116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bc80 is same with the state(5) to be set
00:26:50.398 [2024-05-15 14:04:48.818132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor
00:26:50.398 [2024-05-15 14:04:48.818146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.398 [2024-05-15 14:04:48.818155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.398 [2024-05-15 14:04:48.818165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.398 [2024-05-15 14:04:48.818184] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.398 [2024-05-15 14:04:48.818194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.398 14:04:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:26:51.334 [2024-05-15 14:04:49.816681] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.334 [2024-05-15 14:04:49.816770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.334 [2024-05-15 14:04:49.816802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.334 [2024-05-15 14:04:49.816814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237bc80 with addr=10.0.0.2, port=4420
00:26:51.334 [2024-05-15 14:04:49.816827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bc80 is same with the state(5) to be set
00:26:51.334 [2024-05-15 14:04:49.816848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor
00:26:51.334 [2024-05-15 14:04:49.816863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.334 [2024-05-15 14:04:49.816871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.334 [2024-05-15 14:04:49.816881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.335 [2024-05-15 14:04:49.816904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
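The connect() failures above (errno = 111, i.e. ECONNREFUSED) are expected at this point in host/timeout.sh: the test has removed the TCP listener from the subsystem, so every reconnect attempt is refused and spdk_nvme_ctrlr_reconnect_poll_async keeps failing until the listener is restored at host/timeout.sh@91 just below. A minimal sketch of that listener bounce, reusing the rpc.py invocations that appear verbatim in this log (it assumes an SPDK target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 over the default RPC socket; the sleep is illustrative, not the test's actual timing):

  #!/usr/bin/env bash
  # Bounce the NVMe/TCP listener to exercise initiator reconnect/timeout handling.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  sleep 5   # while the listener is gone, connect() fails with errno 111 as logged above
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
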
00:26:51.335 [2024-05-15 14:04:49.816913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
14:04:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:51.594 [2024-05-15 14:04:50.016462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:51.594 14:04:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 80903
00:26:52.531 [2024-05-15 14:04:50.830618] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:00.666
00:27:00.666 Latency(us)
00:27:00.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.666 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:00.666 Verification LBA range: start 0x0 length 0x4000
00:27:00.666 NVMe0n1 : 10.01 7873.88 30.76 0.00 0.00 16229.58 2381.93 3018551.31
00:27:00.666 ===================================================================================================================
00:27:00.666 Total : 7873.88 30.76 0.00 0.00 16229.58 2381.93 3018551.31
00:27:00.666 0
00:27:00.666 14:04:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81013
00:27:00.666 14:04:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:00.666 14:04:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:27:00.666 Running I/O for 10 seconds...
14:04:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:00.666 [2024-05-15 14:04:58.979137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:00.666 [2024-05-15 14:04:58.979210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:00.666 [2024-05-15 14:04:58.979232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:00.666 [2024-05-15 14:04:58.979248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:00.666 [2024-05-15 14:04:58.979264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:00.666 [2024-05-15 14:04:58.979280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:00.666 [2024-05-15 14:04:58.979297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:00.666 [2024-05-15 14:04:58.979312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:00.666 [2024-05-15 14:04:58.979327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bc80 is same with the state(5) to be set
00:27:00.666 [2024-05-15 14:04:58.979407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94608 len:8 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 
[2024-05-15 14:04:58.979774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.666 [2024-05-15 14:04:58.979876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.979908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.979941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.979974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.979990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.666 [2024-05-15 14:04:58.980221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.666 [2024-05-15 14:04:58.980238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.667 [2024-05-15 14:04:58.980918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.980981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.980998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 
14:04:58.981095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.667 [2024-05-15 14:04:58.981448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.667 [2024-05-15 14:04:58.981466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.981710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:118 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.981974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.981990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95448 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.668 [2024-05-15 14:04:58.982377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 
[2024-05-15 14:04:58.982463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.668 [2024-05-15 14:04:58.982754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.668 [2024-05-15 14:04:58.982775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.982792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.982807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.982824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.982840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.982856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.982871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.982888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.982903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.982920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.982935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.982951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.982966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.982987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.669 [2024-05-15 14:04:58.983391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.983423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.983454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.983486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.983524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.983562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.983594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.669 [2024-05-15 14:04:58.983626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:00.669 [2024-05-15 14:04:58.983690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:00.669 [2024-05-15 14:04:58.983704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95032 len:8 PRP1 0x0 PRP2 0x0 00:27:00.669 [2024-05-15 14:04:58.983719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.669 [2024-05-15 14:04:58.983799] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23d9b70 was disconnected and freed. reset controller. 
00:27:00.669 [2024-05-15 14:04:58.984083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.669 [2024-05-15 14:04:58.984122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor 00:27:00.669 [2024-05-15 14:04:58.984243] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.669 [2024-05-15 14:04:58.984313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.669 [2024-05-15 14:04:58.984360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.669 [2024-05-15 14:04:58.984387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237bc80 with addr=10.0.0.2, port=4420 00:27:00.669 [2024-05-15 14:04:58.984403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bc80 is same with the state(5) to be set 00:27:00.669 [2024-05-15 14:04:58.984431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor 00:27:00.669 [2024-05-15 14:04:58.984454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.669 [2024-05-15 14:04:58.984468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.669 [2024-05-15 14:04:58.984484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.669 [2024-05-15 14:04:58.984510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.669 [2024-05-15 14:04:58.984526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.669 14:04:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:27:01.603 [2024-05-15 14:04:59.983048] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.603 [2024-05-15 14:04:59.983149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.603 [2024-05-15 14:04:59.983193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.603 [2024-05-15 14:04:59.983211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237bc80 with addr=10.0.0.2, port=4420 00:27:01.603 [2024-05-15 14:04:59.983231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bc80 is same with the state(5) to be set 00:27:01.603 [2024-05-15 14:04:59.983270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor 00:27:01.603 [2024-05-15 14:04:59.983294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.603 [2024-05-15 14:04:59.983306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.603 [2024-05-15 14:04:59.983320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.603 [2024-05-15 14:04:59.983354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.603 [2024-05-15 14:04:59.983370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.539 [2024-05-15 14:05:00.981892] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.539 [2024-05-15 14:05:00.982002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.539 [2024-05-15 14:05:00.982052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.539 [2024-05-15 14:05:00.982071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237bc80 with addr=10.0.0.2, port=4420 00:27:02.539 [2024-05-15 14:05:00.982091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bc80 is same with the state(5) to be set 00:27:02.539 [2024-05-15 14:05:00.982131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor 00:27:02.539 [2024-05-15 14:05:00.982153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.539 [2024-05-15 14:05:00.982167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.539 [2024-05-15 14:05:00.982182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.539 [2024-05-15 14:05:00.982216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.539 [2024-05-15 14:05:00.982231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.474 [2024-05-15 14:05:01.982701] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.474 [2024-05-15 14:05:01.982820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.474 [2024-05-15 14:05:01.982869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.474 [2024-05-15 14:05:01.982887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237bc80 with addr=10.0.0.2, port=4420 00:27:03.474 [2024-05-15 14:05:01.982907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237bc80 is same with the state(5) to be set 00:27:03.474 [2024-05-15 14:05:01.983172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237bc80 (9): Bad file descriptor 00:27:03.474 [2024-05-15 14:05:01.983412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.474 [2024-05-15 14:05:01.983442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.474 [2024-05-15 14:05:01.983459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.474 [2024-05-15 14:05:01.986439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.474 [2024-05-15 14:05:01.986478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.474 14:05:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.733 [2024-05-15 14:05:02.175008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.733 14:05:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 81013 00:27:04.685 [2024-05-15 14:05:03.021194] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:09.952 00:27:09.952 Latency(us) 00:27:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.952 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:09.952 Verification LBA range: start 0x0 length 0x4000 00:27:09.952 NVMe0n1 : 10.01 6851.05 26.76 5079.25 0.00 10707.07 493.49 3018551.31 00:27:09.952 =================================================================================================================== 00:27:09.952 Total : 6851.05 26.76 5079.25 0.00 10707.07 0.00 3018551.31 00:27:09.952 0 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 80885 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 80885 ']' 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 80885 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80885 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80885' 00:27:09.952 killing process with pid 80885 00:27:09.952 Received shutdown signal, test time was about 10.000000 seconds 00:27:09.952 00:27:09.952 Latency(us) 00:27:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.952 =================================================================================================================== 00:27:09.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 80885 00:27:09.952 14:05:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 80885 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81127 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81127 /var/tmp/bdevperf.sock 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 81127 ']' 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:09.952 14:05:08 
nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:09.952 14:05:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.952 [2024-05-15 14:05:08.150370] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:09.952 [2024-05-15 14:05:08.150461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81127 ] 00:27:09.952 [2024-05-15 14:05:08.292980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.952 [2024-05-15 14:05:08.400926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.883 14:05:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:10.883 14:05:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:27:10.883 14:05:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81142 00:27:10.883 14:05:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81127 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:10.883 14:05:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:10.883 14:05:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:11.139 NVMe0n1 00:27:11.139 14:05:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81185 00:27:11.139 14:05:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:11.139 14:05:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:11.139 Running I/O for 10 seconds... 
00:27:12.071 14:05:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.332 [2024-05-15 14:05:10.782818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.782892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.782926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.782941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.782959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.782975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.782992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 
[2024-05-15 14:05:10.783228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.332 [2024-05-15 14:05:10.783671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.332 [2024-05-15 14:05:10.783687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.783967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.783986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 
[2024-05-15 14:05:10.784534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784868] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.784975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.784990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.785007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.333 [2024-05-15 14:05:10.785022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.333 [2024-05-15 14:05:10.785039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84344 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.785980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.785997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.334 [2024-05-15 14:05:10.786234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.334 [2024-05-15 14:05:10.786406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.334 [2024-05-15 14:05:10.786423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 
14:05:10.786560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.786976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.786990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.787007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.787022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.787041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.787056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.787074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.335 [2024-05-15 14:05:10.787089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.787106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd40220 is same with the state(5) to be set 00:27:12.335 [2024-05-15 14:05:10.787132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:12.335 [2024-05-15 14:05:10.787140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:12.335 [2024-05-15 14:05:10.787149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116696 len:8 PRP1 0x0 PRP2 0x0 00:27:12.335 [2024-05-15 14:05:10.787159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.335 [2024-05-15 14:05:10.787221] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd40220 was disconnected and freed. reset controller. 
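The long run of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices above is the expected fallout of deleting a submission queue while random reads are still in flight: every queued READ on qid:1 is completed manually with ABORTED - SQ DELETION (status 00/08) before the qpair at 0xd40220 is freed and the controller reset begins. If this console output is saved to a file (build.log below is only a placeholder name, not a path the test writes), the aborts are easy to tally:

  # count aborted completions per queue id; build.log is an assumed capture of the console above
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c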
00:27:12.335 [2024-05-15 14:05:10.787541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.335 [2024-05-15 14:05:10.787691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4ef0 (9): Bad file descriptor 00:27:12.335 [2024-05-15 14:05:10.787865] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.335 [2024-05-15 14:05:10.787951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.335 [2024-05-15 14:05:10.788006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.335 [2024-05-15 14:05:10.788027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd4ef0 with addr=10.0.0.2, port=4420 00:27:12.335 [2024-05-15 14:05:10.788046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd4ef0 is same with the state(5) to be set 00:27:12.335 [2024-05-15 14:05:10.788079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4ef0 (9): Bad file descriptor 00:27:12.335 [2024-05-15 14:05:10.788102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.335 [2024-05-15 14:05:10.788114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.335 [2024-05-15 14:05:10.788129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.335 [2024-05-15 14:05:10.788160] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.335 [2024-05-15 14:05:10.788189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.335 14:05:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 81185 00:27:14.237 [2024-05-15 14:05:12.785126] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.237 [2024-05-15 14:05:12.785256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.237 [2024-05-15 14:05:12.785305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.237 [2024-05-15 14:05:12.785324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd4ef0 with addr=10.0.0.2, port=4420 00:27:14.237 [2024-05-15 14:05:12.785342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd4ef0 is same with the state(5) to be set 00:27:14.237 [2024-05-15 14:05:12.785381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4ef0 (9): Bad file descriptor 00:27:14.237 [2024-05-15 14:05:12.785405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.237 [2024-05-15 14:05:12.785420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.237 [2024-05-15 14:05:12.785436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.237 [2024-05-15 14:05:12.785471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
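Every reconnect attempt above fails the same way: connect() returns errno 111 (ECONNREFUSED) from both the io_uring and POSIX socket back ends because nothing is listening on 10.0.0.2:4420 any more, so nvme_ctrlr_process_init reports the controller as failed and bdev_nvme schedules the next attempt. The attempts land roughly two seconds apart (14:05:10, 14:05:12, 14:05:14), consistent with a 2 s reconnect delay. As a sketch only (option names as in recent rpc.py; the values are illustrative and not copied from host/timeout.sh), such a policy is set when the controller is attached:

  # sketch: attach a controller with an explicit reconnect policy (values assumed for illustration)
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8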
00:27:14.237 [2024-05-15 14:05:12.785486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.771 [2024-05-15 14:05:14.782414] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.771 [2024-05-15 14:05:14.782518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.771 [2024-05-15 14:05:14.782569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.771 [2024-05-15 14:05:14.782589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd4ef0 with addr=10.0.0.2, port=4420 00:27:16.771 [2024-05-15 14:05:14.782608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd4ef0 is same with the state(5) to be set 00:27:16.771 [2024-05-15 14:05:14.782646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4ef0 (9): Bad file descriptor 00:27:16.771 [2024-05-15 14:05:14.782672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.771 [2024-05-15 14:05:14.782688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.771 [2024-05-15 14:05:14.782704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.771 [2024-05-15 14:05:14.782748] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.771 [2024-05-15 14:05:14.782761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:18.675 [2024-05-15 14:05:16.779604] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:19.288 00:27:19.288 Latency(us) 00:27:19.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.288 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:19.288 NVMe0n1 : 8.12 2501.01 9.77 15.75 0.00 50996.07 6553.60 7061253.96 00:27:19.288 =================================================================================================================== 00:27:19.288 Total : 2501.01 9.77 15.75 0.00 50996.07 6553.60 7061253.96 00:27:19.288 0 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:19.288 Attaching 5 probes... 
00:27:19.288 1142.634292: reset bdev controller NVMe0 00:27:19.288 1142.859155: reconnect bdev controller NVMe0 00:27:19.288 3140.105028: reconnect delay bdev controller NVMe0 00:27:19.288 3140.127432: reconnect bdev controller NVMe0 00:27:19.288 5137.399077: reconnect delay bdev controller NVMe0 00:27:19.288 5137.420871: reconnect bdev controller NVMe0 00:27:19.288 7134.674885: reconnect delay bdev controller NVMe0 00:27:19.288 7134.696170: reconnect bdev controller NVMe0 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 81142 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81127 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 81127 ']' 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 81127 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.288 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81127 00:27:19.548 killing process with pid 81127 00:27:19.548 Received shutdown signal, test time was about 8.212437 seconds 00:27:19.548 00:27:19.548 Latency(us) 00:27:19.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.548 =================================================================================================================== 00:27:19.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.548 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:19.548 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:19.548 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81127' 00:27:19.548 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 81127 00:27:19.548 14:05:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 81127 00:27:19.548 14:05:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.807 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.807 rmmod nvme_tcp 00:27:20.067 rmmod nvme_fabrics 00:27:20.067 rmmod nvme_keyring 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
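The pass/fail decision for nvmf_timeout comes down to the two shell-trace lines above: grep -c counts the 'reconnect delay bdev controller NVMe0' events in the captured trace.txt (three of them in this run), and the run only counts as a failure if that number is 2 or fewer, so (( 3 <= 2 )) evaluating to false is the success path. In sketch form (the error handling is assumed here, not copied from timeout.sh):

  # gate on repeated reconnect-delay events recorded in the captured trace
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  if (( delays <= 2 )); then
      echo "expected repeated reconnect delays, saw only $delays" >&2
      exit 1
  fi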
set -e 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 80692 ']' 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 80692 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 80692 ']' 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 80692 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80692 00:27:20.067 killing process with pid 80692 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80692' 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 80692 00:27:20.067 [2024-05-15 14:05:18.438269] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:20.067 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 80692 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:20.326 00:27:20.326 real 0m46.229s 00:27:20.326 user 2m13.655s 00:27:20.326 sys 0m6.704s 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:20.326 14:05:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.326 ************************************ 00:27:20.326 END TEST nvmf_timeout 00:27:20.326 ************************************ 00:27:20.326 14:05:18 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:27:20.326 14:05:18 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:27:20.326 14:05:18 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.326 14:05:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.326 14:05:18 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:20.326 00:27:20.326 real 11m18.877s 00:27:20.326 user 26m27.459s 00:27:20.326 sys 3m27.877s 00:27:20.326 14:05:18 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:20.326 14:05:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.326 
************************************ 00:27:20.326 END TEST nvmf_tcp 00:27:20.326 ************************************ 00:27:20.586 14:05:18 -- spdk/autotest.sh@284 -- # [[ 1 -eq 0 ]] 00:27:20.586 14:05:18 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:20.586 14:05:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:20.586 14:05:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:20.586 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:27:20.586 ************************************ 00:27:20.586 START TEST nvmf_dif 00:27:20.586 ************************************ 00:27:20.586 14:05:18 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:20.586 * Looking for test storage... 00:27:20.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:20.586 14:05:19 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:20.586 14:05:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.586 14:05:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.586 14:05:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.586 14:05:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.586 14:05:19 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.586 14:05:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.586 14:05:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:20.586 14:05:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.586 14:05:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:20.586 14:05:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:20.586 14:05:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:20.586 14:05:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:20.586 14:05:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.586 14:05:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:20.586 14:05:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:20.586 14:05:19 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:20.586 14:05:19 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:20.845 Cannot find device "nvmf_tgt_br" 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@155 -- # true 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:20.845 Cannot find device "nvmf_tgt_br2" 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@156 -- # true 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:20.845 Cannot find device "nvmf_tgt_br" 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@158 -- # true 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:20.845 Cannot find device "nvmf_tgt_br2" 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@159 -- # true 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:20.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:20.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:27:20.845 14:05:19 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:21.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:27:21.104 00:27:21.104 --- 10.0.0.2 ping statistics --- 00:27:21.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.104 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:21.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:21.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:27:21.104 00:27:21.104 --- 10.0.0.3 ping statistics --- 00:27:21.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.104 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:21.104 14:05:19 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:21.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:27:21.104 00:27:21.104 --- 10.0.0.1 ping statistics --- 00:27:21.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.105 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:21.105 14:05:19 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.105 14:05:19 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:27:21.105 14:05:19 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:21.105 14:05:19 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:21.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:21.710 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:21.710 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.710 14:05:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:21.710 14:05:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=81622 00:27:21.710 14:05:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 81622 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 81622 ']' 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:21.710 14:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:21.710 [2024-05-15 14:05:20.263060] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:27:21.710 [2024-05-15 14:05:20.263147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.968 [2024-05-15 14:05:20.411940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.968 [2024-05-15 14:05:20.518159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
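At this point nvmf_veth_init has finished wiring the virtual test network and verified it with the three pings above: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator side, while nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace for the target, with the bridge-side end of each veth pair enslaved to nvmf_br and an iptables rule accepting TCP port 4420. Condensed from the trace (one target interface shown; the second interface and the link-up and iptables steps follow the same pattern):

  # condensed from the nvmf_veth_init trace above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The nvmf_tgt application (pid 81622, per the waitforlisten above) is then started inside that namespace with -i 0 -e 0xFFFF, which is where the EAL and application start-up notices come from.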
00:27:21.968 [2024-05-15 14:05:20.518229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.968 [2024-05-15 14:05:20.518239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.968 [2024-05-15 14:05:20.518248] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.968 [2024-05-15 14:05:20.518255] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.968 [2024-05-15 14:05:20.518283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:27:22.942 14:05:21 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.942 14:05:21 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.942 14:05:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:22.942 14:05:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.942 [2024-05-15 14:05:21.205364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.942 14:05:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:22.942 14:05:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.942 ************************************ 00:27:22.942 START TEST fio_dif_1_default 00:27:22.942 ************************************ 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:22.942 bdev_null0 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.942 14:05:21 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:22.942 [2024-05-15 14:05:21.269258] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:22.942 [2024-05-15 14:05:21.269495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.942 { 00:27:22.942 "params": { 00:27:22.942 "name": "Nvme$subsystem", 00:27:22.942 "trtype": "$TEST_TRANSPORT", 00:27:22.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.942 "adrfam": "ipv4", 00:27:22.942 "trsvcid": "$NVMF_PORT", 00:27:22.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.942 "hdgst": ${hdgst:-false}, 00:27:22.942 "ddgst": ${ddgst:-false} 00:27:22.942 }, 00:27:22.942 "method": "bdev_nvme_attach_controller" 00:27:22.942 } 00:27:22.942 EOF 00:27:22.942 )") 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:22.942 "params": { 00:27:22.942 "name": "Nvme0", 00:27:22.942 "trtype": "tcp", 00:27:22.942 "traddr": "10.0.0.2", 00:27:22.942 "adrfam": "ipv4", 00:27:22.942 "trsvcid": "4420", 00:27:22.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:22.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:22.942 "hdgst": false, 00:27:22.942 "ddgst": false 00:27:22.942 }, 00:27:22.942 "method": "bdev_nvme_attach_controller" 00:27:22.942 }' 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:22.942 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:22.943 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:22.943 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:22.943 14:05:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.943 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:22.943 fio-3.35 00:27:22.943 Starting 1 thread 00:27:35.226 00:27:35.226 filename0: (groupid=0, jobs=1): err= 0: pid=81689: Wed May 15 14:05:32 2024 00:27:35.226 read: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(437MiB/10001msec) 00:27:35.226 slat (usec): min=5, max=141, avg= 6.70, stdev= 1.66 00:27:35.226 clat (usec): min=289, max=3911, avg=339.11, stdev=40.97 00:27:35.226 lat (usec): min=295, max=3948, 
avg=345.81, stdev=41.44 00:27:35.226 clat percentiles (usec): 00:27:35.226 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:27:35.226 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:27:35.226 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 388], 00:27:35.226 | 99.00th=[ 429], 99.50th=[ 494], 99.90th=[ 668], 99.95th=[ 963], 00:27:35.226 | 99.99th=[ 1303] 00:27:35.226 bw ( KiB/s): min=41280, max=47232, per=100.00%, avg=44930.58, stdev=1681.28, samples=19 00:27:35.226 iops : min=10320, max=11808, avg=11232.63, stdev=420.32, samples=19 00:27:35.226 lat (usec) : 500=99.53%, 750=0.41%, 1000=0.04% 00:27:35.226 lat (msec) : 2=0.02%, 4=0.01% 00:27:35.226 cpu : usr=81.83%, sys=16.74%, ctx=23, majf=0, minf=0 00:27:35.226 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:35.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.226 issued rwts: total=111840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.226 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:35.226 00:27:35.226 Run status group 0 (all jobs): 00:27:35.226 READ: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=437MiB (458MB), run=10001-10001msec 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 00:27:35.226 real 0m10.983s 00:27:35.226 user 0m8.804s 00:27:35.226 sys 0m1.958s 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 ************************************ 00:27:35.226 END TEST fio_dif_1_default 00:27:35.226 ************************************ 00:27:35.226 14:05:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:35.226 14:05:32 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.226 14:05:32 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 ************************************ 00:27:35.226 START TEST fio_dif_1_multi_subsystems 00:27:35.226 
************************************ 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 bdev_null0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 [2024-05-15 14:05:32.322167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 bdev_null1 
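Each of the two subsystems used by fio_dif_1_multi_subsystems is backed by its own null bdev created with the NULL_* defaults set earlier in dif.sh: size 64, 512-byte blocks, 16 bytes of per-block metadata and DIF type 1 protection, which the --dif-insert-or-strip flag on the TCP transport is meant to exercise. Stripped of the xtrace noise, the recipe just completed for subsystem 0 (and repeated for bdev_null1 and cnode1 immediately below) is, via the rpc.py wrapper behind rpc_cmd:

  # per-subsystem setup, condensed from the rpc_cmd calls in the trace
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420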
00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.227 { 00:27:35.227 "params": { 00:27:35.227 "name": "Nvme$subsystem", 00:27:35.227 "trtype": "$TEST_TRANSPORT", 00:27:35.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.227 "adrfam": "ipv4", 00:27:35.227 "trsvcid": "$NVMF_PORT", 00:27:35.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.227 "hdgst": ${hdgst:-false}, 00:27:35.227 "ddgst": ${ddgst:-false} 00:27:35.227 }, 00:27:35.227 "method": "bdev_nvme_attach_controller" 00:27:35.227 } 00:27:35.227 EOF 00:27:35.227 )") 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local 
file 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.227 { 00:27:35.227 "params": { 00:27:35.227 "name": "Nvme$subsystem", 00:27:35.227 "trtype": "$TEST_TRANSPORT", 00:27:35.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.227 "adrfam": "ipv4", 00:27:35.227 "trsvcid": "$NVMF_PORT", 00:27:35.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.227 "hdgst": ${hdgst:-false}, 00:27:35.227 "ddgst": ${ddgst:-false} 00:27:35.227 }, 00:27:35.227 "method": "bdev_nvme_attach_controller" 00:27:35.227 } 00:27:35.227 EOF 00:27:35.227 )") 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
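For readers reconstructing the run from this trace: at this point dif.sh has generated two here-doc fragments (one per subsystem, via gen_nvmf_target_json 0 1); they are joined and emitted as the JSON printed just below, then handed to fio's SPDK bdev plugin. Stripped of the xtrace noise, the invocation amounts to roughly the sketch below; the LD_PRELOAD path and the /dev/fd numbers are simply the ones this run happens to use, not something to copy verbatim.

    LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf /dev/fd/62 /dev/fd/61
    # /dev/fd/62 : the merged bdev_nvme_attach_controller config (the JSON printed below)
    # /dev/fd/61 : the fio job file produced by gen_fio_conf (one filenameN section per subsystem)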
00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:35.227 "params": { 00:27:35.227 "name": "Nvme0", 00:27:35.227 "trtype": "tcp", 00:27:35.227 "traddr": "10.0.0.2", 00:27:35.227 "adrfam": "ipv4", 00:27:35.227 "trsvcid": "4420", 00:27:35.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:35.227 "hdgst": false, 00:27:35.227 "ddgst": false 00:27:35.227 }, 00:27:35.227 "method": "bdev_nvme_attach_controller" 00:27:35.227 },{ 00:27:35.227 "params": { 00:27:35.227 "name": "Nvme1", 00:27:35.227 "trtype": "tcp", 00:27:35.227 "traddr": "10.0.0.2", 00:27:35.227 "adrfam": "ipv4", 00:27:35.227 "trsvcid": "4420", 00:27:35.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.227 "hdgst": false, 00:27:35.227 "ddgst": false 00:27:35.227 }, 00:27:35.227 "method": "bdev_nvme_attach_controller" 00:27:35.227 }' 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:35.227 14:05:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.227 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:35.227 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:35.227 fio-3.35 00:27:35.227 Starting 2 threads 00:27:45.204 00:27:45.204 filename0: (groupid=0, jobs=1): err= 0: pid=81847: Wed May 15 14:05:43 2024 00:27:45.204 read: IOPS=6436, BW=25.1MiB/s (26.4MB/s)(251MiB/10001msec) 00:27:45.204 slat (nsec): min=5796, max=62688, avg=10846.01, stdev=2622.98 00:27:45.204 clat (usec): min=404, max=1155, avg=592.88, stdev=25.51 00:27:45.204 lat (usec): min=448, max=1192, avg=603.72, stdev=26.29 00:27:45.204 clat percentiles (usec): 00:27:45.204 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 578], 00:27:45.204 | 30.00th=[ 586], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 603], 00:27:45.204 | 70.00th=[ 611], 80.00th=[ 611], 90.00th=[ 619], 95.00th=[ 627], 00:27:45.204 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 734], 00:27:45.204 | 99.99th=[ 832] 00:27:45.204 bw ( KiB/s): min=25536, max=26016, per=50.03%, avg=25766.74, stdev=117.61, samples=19 00:27:45.204 iops : min= 6384, max= 
6504, avg=6441.68, stdev=29.40, samples=19 00:27:45.204 lat (usec) : 500=0.02%, 750=99.95%, 1000=0.03% 00:27:45.204 lat (msec) : 2=0.01% 00:27:45.204 cpu : usr=88.69%, sys=10.32%, ctx=13, majf=0, minf=0 00:27:45.204 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.204 issued rwts: total=64372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.204 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:45.204 filename1: (groupid=0, jobs=1): err= 0: pid=81848: Wed May 15 14:05:43 2024 00:27:45.204 read: IOPS=6438, BW=25.2MiB/s (26.4MB/s)(252MiB/10001msec) 00:27:45.204 slat (nsec): min=5783, max=39498, avg=10966.91, stdev=2806.54 00:27:45.204 clat (usec): min=312, max=1176, avg=591.90, stdev=21.69 00:27:45.204 lat (usec): min=318, max=1212, avg=602.87, stdev=22.07 00:27:45.204 clat percentiles (usec): 00:27:45.204 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:27:45.204 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 594], 00:27:45.204 | 70.00th=[ 603], 80.00th=[ 611], 90.00th=[ 619], 95.00th=[ 627], 00:27:45.204 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 701], 00:27:45.204 | 99.99th=[ 750] 00:27:45.204 bw ( KiB/s): min=25536, max=26112, per=50.05%, avg=25775.16, stdev=135.92, samples=19 00:27:45.204 iops : min= 6384, max= 6528, avg=6443.79, stdev=33.98, samples=19 00:27:45.204 lat (usec) : 500=0.03%, 750=99.96%, 1000=0.01% 00:27:45.204 lat (msec) : 2=0.01% 00:27:45.204 cpu : usr=88.92%, sys=10.06%, ctx=14, majf=0, minf=0 00:27:45.204 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.204 issued rwts: total=64392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.204 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:45.204 00:27:45.204 Run status group 0 (all jobs): 00:27:45.204 READ: bw=50.3MiB/s (52.7MB/s), 25.1MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=503MiB (527MB), run=10001-10001msec 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.204 00:27:45.204 real 0m11.128s 00:27:45.204 user 0m18.526s 00:27:45.204 sys 0m2.344s 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 ************************************ 00:27:45.204 END TEST fio_dif_1_multi_subsystems 00:27:45.204 ************************************ 00:27:45.204 14:05:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:45.204 14:05:43 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:45.204 14:05:43 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 ************************************ 00:27:45.204 START TEST fio_dif_rand_params 00:27:45.204 ************************************ 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:45.204 14:05:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 bdev_null0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.204 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 [2024-05-15 14:05:43.522273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.205 { 00:27:45.205 "params": { 00:27:45.205 "name": "Nvme$subsystem", 00:27:45.205 "trtype": "$TEST_TRANSPORT", 00:27:45.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.205 "adrfam": "ipv4", 00:27:45.205 "trsvcid": "$NVMF_PORT", 00:27:45.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.205 "hdgst": ${hdgst:-false}, 00:27:45.205 "ddgst": ${ddgst:-false} 00:27:45.205 }, 00:27:45.205 "method": "bdev_nvme_attach_controller" 00:27:45.205 } 00:27:45.205 EOF 00:27:45.205 )") 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
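The rand_params case repeats the same plumbing against a single null bdev created with --dif-type 3, using the heavier parameters set at the top of the test (bs=128k, numjobs=3, iodepth=3, runtime=5). As a rough idea of what gen_fio_conf hands to fio on /dev/fd/61, the sketch below uses only values confirmed by this log's fio banner (randread, 128KiB blocks, iodepth 3, 3 threads); the section name and the filename are illustrative assumptions, not a verbatim dump of the generated job file.

    # Illustrative fio job sketch; ioengine=spdk_bdev is supplied on the fio command line.
    # "Nvme0n1" is an assumed bdev name (the namespace behind bdev_nvme_attach_controller name=Nvme0).
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    numjobs=3
    iodepth=3
    runtime=5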
00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:45.205 "params": { 00:27:45.205 "name": "Nvme0", 00:27:45.205 "trtype": "tcp", 00:27:45.205 "traddr": "10.0.0.2", 00:27:45.205 "adrfam": "ipv4", 00:27:45.205 "trsvcid": "4420", 00:27:45.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.205 "hdgst": false, 00:27:45.205 "ddgst": false 00:27:45.205 }, 00:27:45.205 "method": "bdev_nvme_attach_controller" 00:27:45.205 }' 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:45.205 14:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.205 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:45.205 ... 
00:27:45.205 fio-3.35 00:27:45.205 Starting 3 threads 00:27:51.774 00:27:51.774 filename0: (groupid=0, jobs=1): err= 0: pid=82005: Wed May 15 14:05:49 2024 00:27:51.774 read: IOPS=335, BW=41.9MiB/s (44.0MB/s)(210MiB/5007msec) 00:27:51.774 slat (nsec): min=5813, max=37200, avg=13609.22, stdev=4259.76 00:27:51.774 clat (usec): min=6612, max=9570, avg=8910.38, stdev=146.04 00:27:51.774 lat (usec): min=6622, max=9591, avg=8923.99, stdev=146.33 00:27:51.774 clat percentiles (usec): 00:27:51.774 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8848], 20.00th=[ 8848], 00:27:51.774 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 8848], 00:27:51.774 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 9110], 00:27:51.774 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[ 9503], 99.95th=[ 9634], 00:27:51.774 | 99.99th=[ 9634] 00:27:51.774 bw ( KiB/s): min=42240, max=43008, per=33.36%, avg=42931.20, stdev=242.86, samples=10 00:27:51.774 iops : min= 330, max= 336, avg=335.40, stdev= 1.90, samples=10 00:27:51.774 lat (msec) : 10=100.00% 00:27:51.774 cpu : usr=88.83%, sys=10.71%, ctx=78, majf=0, minf=9 00:27:51.774 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.774 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.774 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:51.774 filename0: (groupid=0, jobs=1): err= 0: pid=82006: Wed May 15 14:05:49 2024 00:27:51.774 read: IOPS=335, BW=41.9MiB/s (43.9MB/s)(210MiB/5003msec) 00:27:51.774 slat (nsec): min=6051, max=38508, avg=13993.47, stdev=3892.61 00:27:51.774 clat (usec): min=8814, max=9750, avg=8918.22, stdev=84.77 00:27:51.774 lat (usec): min=8826, max=9775, avg=8932.21, stdev=85.26 00:27:51.774 clat percentiles (usec): 00:27:51.774 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8848], 20.00th=[ 8848], 00:27:51.774 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 8848], 00:27:51.774 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 9110], 00:27:51.774 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[ 9765], 99.95th=[ 9765], 00:27:51.774 | 99.99th=[ 9765] 00:27:51.774 bw ( KiB/s): min=42240, max=43008, per=33.31%, avg=42862.80, stdev=306.75, samples=10 00:27:51.774 iops : min= 330, max= 336, avg=334.80, stdev= 2.53, samples=10 00:27:51.774 lat (msec) : 10=100.00% 00:27:51.774 cpu : usr=89.70%, sys=9.90%, ctx=10, majf=0, minf=0 00:27:51.774 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.774 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.774 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:51.774 filename0: (groupid=0, jobs=1): err= 0: pid=82007: Wed May 15 14:05:49 2024 00:27:51.774 read: IOPS=335, BW=41.9MiB/s (43.9MB/s)(210MiB/5003msec) 00:27:51.774 slat (nsec): min=5961, max=34654, avg=14350.55, stdev=3562.62 00:27:51.774 clat (usec): min=8775, max=10274, avg=8918.32, stdev=98.49 00:27:51.774 lat (usec): min=8787, max=10298, avg=8932.67, stdev=99.23 00:27:51.774 clat percentiles (usec): 00:27:51.774 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8848], 20.00th=[ 8848], 00:27:51.774 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 
8848], 00:27:51.774 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 9110], 00:27:51.774 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[10290], 99.95th=[10290], 00:27:51.774 | 99.99th=[10290] 00:27:51.774 bw ( KiB/s): min=42240, max=43008, per=33.30%, avg=42854.40, stdev=323.82, samples=10 00:27:51.774 iops : min= 330, max= 336, avg=334.80, stdev= 2.53, samples=10 00:27:51.774 lat (msec) : 10=99.82%, 20=0.18% 00:27:51.774 cpu : usr=89.34%, sys=10.24%, ctx=3, majf=0, minf=9 00:27:51.774 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.774 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.774 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:51.774 00:27:51.774 Run status group 0 (all jobs): 00:27:51.774 READ: bw=126MiB/s (132MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-44.0MB/s), io=629MiB (660MB), run=5003-5007msec 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.774 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:51.775 14:05:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 bdev_null0 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 [2024-05-15 14:05:49.497964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 bdev_null1 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 bdev_null2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.775 { 00:27:51.775 "params": { 00:27:51.775 "name": "Nvme$subsystem", 00:27:51.775 "trtype": "$TEST_TRANSPORT", 00:27:51.775 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:51.775 "adrfam": "ipv4", 00:27:51.775 "trsvcid": "$NVMF_PORT", 00:27:51.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.775 "hdgst": ${hdgst:-false}, 00:27:51.775 "ddgst": ${ddgst:-false} 00:27:51.775 }, 00:27:51.775 "method": "bdev_nvme_attach_controller" 00:27:51.775 } 00:27:51.775 EOF 00:27:51.775 )") 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.775 { 00:27:51.775 "params": { 00:27:51.775 "name": "Nvme$subsystem", 00:27:51.775 "trtype": "$TEST_TRANSPORT", 00:27:51.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.775 "adrfam": "ipv4", 00:27:51.775 "trsvcid": "$NVMF_PORT", 00:27:51.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.775 "hdgst": ${hdgst:-false}, 00:27:51.775 "ddgst": ${ddgst:-false} 00:27:51.775 }, 00:27:51.775 "method": "bdev_nvme_attach_controller" 00:27:51.775 } 00:27:51.775 EOF 00:27:51.775 )") 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:51.775 14:05:49 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.775 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.775 { 00:27:51.775 "params": { 00:27:51.775 "name": "Nvme$subsystem", 00:27:51.775 "trtype": "$TEST_TRANSPORT", 00:27:51.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.775 "adrfam": "ipv4", 00:27:51.776 "trsvcid": "$NVMF_PORT", 00:27:51.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.776 "hdgst": ${hdgst:-false}, 00:27:51.776 "ddgst": ${ddgst:-false} 00:27:51.776 }, 00:27:51.776 "method": "bdev_nvme_attach_controller" 00:27:51.776 } 00:27:51.776 EOF 00:27:51.776 )") 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:51.776 "params": { 00:27:51.776 "name": "Nvme0", 00:27:51.776 "trtype": "tcp", 00:27:51.776 "traddr": "10.0.0.2", 00:27:51.776 "adrfam": "ipv4", 00:27:51.776 "trsvcid": "4420", 00:27:51.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.776 "hdgst": false, 00:27:51.776 "ddgst": false 00:27:51.776 }, 00:27:51.776 "method": "bdev_nvme_attach_controller" 00:27:51.776 },{ 00:27:51.776 "params": { 00:27:51.776 "name": "Nvme1", 00:27:51.776 "trtype": "tcp", 00:27:51.776 "traddr": "10.0.0.2", 00:27:51.776 "adrfam": "ipv4", 00:27:51.776 "trsvcid": "4420", 00:27:51.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.776 "hdgst": false, 00:27:51.776 "ddgst": false 00:27:51.776 }, 00:27:51.776 "method": "bdev_nvme_attach_controller" 00:27:51.776 },{ 00:27:51.776 "params": { 00:27:51.776 "name": "Nvme2", 00:27:51.776 "trtype": "tcp", 00:27:51.776 "traddr": "10.0.0.2", 00:27:51.776 "adrfam": "ipv4", 00:27:51.776 "trsvcid": "4420", 00:27:51.776 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:51.776 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:51.776 "hdgst": false, 00:27:51.776 "ddgst": false 00:27:51.776 }, 00:27:51.776 "method": "bdev_nvme_attach_controller" 00:27:51.776 }' 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:51.776 14:05:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.776 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:51.776 ... 00:27:51.776 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:51.776 ... 00:27:51.776 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:51.776 ... 00:27:51.776 fio-3.35 00:27:51.776 Starting 24 threads 00:28:04.080 00:28:04.080 filename0: (groupid=0, jobs=1): err= 0: pid=82107: Wed May 15 14:06:00 2024 00:28:04.080 read: IOPS=284, BW=1138KiB/s (1165kB/s)(11.1MiB/10005msec) 00:28:04.080 slat (usec): min=6, max=5802, avg=17.19, stdev=128.35 00:28:04.080 clat (msec): min=6, max=120, avg=56.16, stdev=16.90 00:28:04.080 lat (msec): min=6, max=120, avg=56.17, stdev=16.89 00:28:04.080 clat percentiles (msec): 00:28:04.080 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 41], 00:28:04.080 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 58], 00:28:04.080 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 85], 00:28:04.080 | 99.00th=[ 99], 99.50th=[ 99], 99.90th=[ 112], 99.95th=[ 121], 00:28:04.080 | 99.99th=[ 121] 00:28:04.080 bw ( KiB/s): min= 752, max= 1304, per=4.17%, avg=1124.21, stdev=168.05, samples=19 00:28:04.080 iops : min= 188, max= 326, avg=281.05, stdev=42.01, samples=19 00:28:04.080 lat (msec) : 10=0.56%, 20=0.46%, 50=39.18%, 100=59.45%, 250=0.35% 00:28:04.080 cpu : usr=41.62%, sys=2.83%, ctx=1246, majf=0, minf=9 00:28:04.080 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:28:04.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 complete : 0=0.0%, 4=88.6%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 issued rwts: total=2846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.080 filename0: (groupid=0, jobs=1): err= 0: pid=82108: Wed May 15 14:06:00 2024 00:28:04.080 read: IOPS=268, BW=1075KiB/s (1100kB/s)(10.5MiB/10025msec) 00:28:04.080 slat (usec): min=3, max=8021, avg=16.43, stdev=154.39 00:28:04.080 clat (msec): min=24, max=119, avg=59.43, stdev=17.19 00:28:04.080 lat (msec): min=24, max=119, avg=59.45, stdev=17.20 00:28:04.080 clat percentiles (msec): 00:28:04.080 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:28:04.080 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:28:04.080 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 94], 00:28:04.080 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 116], 00:28:04.080 | 99.99th=[ 121] 00:28:04.080 bw ( KiB/s): min= 656, max= 1208, per=3.98%, avg=1072.95, stdev=154.17, samples=20 00:28:04.080 iops : min= 164, max= 302, avg=268.20, stdev=38.56, samples=20 00:28:04.080 lat (msec) : 50=36.98%, 100=60.79%, 250=2.23% 00:28:04.080 cpu : usr=31.06%, sys=2.11%, ctx=854, majf=0, minf=9 00:28:04.080 IO depths : 1=0.1%, 2=1.0%, 4=4.3%, 8=78.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:28:04.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 issued rwts: total=2693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.080 
latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.080 filename0: (groupid=0, jobs=1): err= 0: pid=82109: Wed May 15 14:06:00 2024 00:28:04.080 read: IOPS=280, BW=1121KiB/s (1148kB/s)(11.0MiB/10019msec) 00:28:04.080 slat (usec): min=2, max=8015, avg=17.65, stdev=165.79 00:28:04.080 clat (msec): min=24, max=119, avg=56.96, stdev=14.77 00:28:04.080 lat (msec): min=24, max=119, avg=56.97, stdev=14.77 00:28:04.080 clat percentiles (msec): 00:28:04.080 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 47], 00:28:04.080 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 61], 00:28:04.080 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 85], 00:28:04.080 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 106], 99.95th=[ 108], 00:28:04.080 | 99.99th=[ 121] 00:28:04.080 bw ( KiB/s): min= 896, max= 1338, per=4.15%, avg=1119.70, stdev=105.60, samples=20 00:28:04.080 iops : min= 224, max= 334, avg=279.90, stdev=26.35, samples=20 00:28:04.080 lat (msec) : 50=38.23%, 100=61.45%, 250=0.32% 00:28:04.080 cpu : usr=35.14%, sys=2.50%, ctx=1264, majf=0, minf=9 00:28:04.080 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.4%, 16=17.1%, 32=0.0%, >=64=0.0% 00:28:04.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 complete : 0=0.0%, 4=87.9%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 issued rwts: total=2809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.080 filename0: (groupid=0, jobs=1): err= 0: pid=82110: Wed May 15 14:06:00 2024 00:28:04.080 read: IOPS=286, BW=1148KiB/s (1175kB/s)(11.2MiB/10008msec) 00:28:04.080 slat (usec): min=6, max=8031, avg=19.75, stdev=211.45 00:28:04.080 clat (msec): min=12, max=116, avg=55.66, stdev=15.82 00:28:04.080 lat (msec): min=12, max=116, avg=55.68, stdev=15.82 00:28:04.080 clat percentiles (msec): 00:28:04.080 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 43], 00:28:04.080 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 59], 00:28:04.080 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 85], 00:28:04.080 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 111], 99.95th=[ 116], 00:28:04.080 | 99.99th=[ 116] 00:28:04.080 bw ( KiB/s): min= 848, max= 1304, per=4.23%, avg=1139.37, stdev=130.81, samples=19 00:28:04.080 iops : min= 212, max= 326, avg=284.84, stdev=32.70, samples=19 00:28:04.080 lat (msec) : 20=0.56%, 50=41.50%, 100=57.56%, 250=0.38% 00:28:04.080 cpu : usr=31.68%, sys=2.06%, ctx=931, majf=0, minf=9 00:28:04.080 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:04.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.080 issued rwts: total=2872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.080 filename0: (groupid=0, jobs=1): err= 0: pid=82111: Wed May 15 14:06:00 2024 00:28:04.080 read: IOPS=273, BW=1095KiB/s (1121kB/s)(10.7MiB/10037msec) 00:28:04.080 slat (usec): min=5, max=8020, avg=15.55, stdev=152.84 00:28:04.080 clat (msec): min=2, max=107, avg=58.34, stdev=17.66 00:28:04.080 lat (msec): min=2, max=107, avg=58.35, stdev=17.66 00:28:04.080 clat percentiles (msec): 00:28:04.080 | 1.00th=[ 5], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 48], 00:28:04.080 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 00:28:04.080 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 88], 
00:28:04.081 | 99.00th=[ 96], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:28:04.081 | 99.99th=[ 108] 00:28:04.081 bw ( KiB/s): min= 704, max= 1904, per=4.05%, avg=1092.00, stdev=231.46, samples=20 00:28:04.081 iops : min= 176, max= 476, avg=273.00, stdev=57.87, samples=20 00:28:04.081 lat (msec) : 4=0.58%, 10=1.75%, 20=0.51%, 50=29.48%, 100=66.85% 00:28:04.081 lat (msec) : 250=0.84% 00:28:04.081 cpu : usr=34.18%, sys=2.52%, ctx=1158, majf=0, minf=9 00:28:04.081 IO depths : 1=0.1%, 2=0.8%, 4=2.9%, 8=79.0%, 16=17.1%, 32=0.0%, >=64=0.0% 00:28:04.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 complete : 0=0.0%, 4=89.0%, 8=10.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 issued rwts: total=2748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.081 filename0: (groupid=0, jobs=1): err= 0: pid=82112: Wed May 15 14:06:00 2024 00:28:04.081 read: IOPS=276, BW=1104KiB/s (1131kB/s)(10.8MiB/10026msec) 00:28:04.081 slat (usec): min=4, max=8026, avg=27.77, stdev=340.68 00:28:04.081 clat (msec): min=23, max=111, avg=57.77, stdev=15.00 00:28:04.081 lat (msec): min=23, max=111, avg=57.80, stdev=14.99 00:28:04.081 clat percentiles (msec): 00:28:04.081 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 47], 00:28:04.081 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 60], 00:28:04.081 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 85], 00:28:04.081 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 112], 99.95th=[ 112], 00:28:04.081 | 99.99th=[ 112] 00:28:04.081 bw ( KiB/s): min= 896, max= 1392, per=4.09%, avg=1103.25, stdev=121.29, samples=20 00:28:04.081 iops : min= 224, max= 348, avg=275.80, stdev=30.32, samples=20 00:28:04.081 lat (msec) : 50=35.98%, 100=63.69%, 250=0.33% 00:28:04.081 cpu : usr=31.09%, sys=2.21%, ctx=923, majf=0, minf=9 00:28:04.081 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=81.9%, 16=17.3%, 32=0.0%, >=64=0.0% 00:28:04.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 complete : 0=0.0%, 4=88.2%, 8=11.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.081 filename0: (groupid=0, jobs=1): err= 0: pid=82113: Wed May 15 14:06:00 2024 00:28:04.081 read: IOPS=280, BW=1124KiB/s (1151kB/s)(11.0MiB/10024msec) 00:28:04.081 slat (usec): min=2, max=5023, avg=25.08, stdev=209.30 00:28:04.081 clat (msec): min=25, max=131, avg=56.77, stdev=17.06 00:28:04.081 lat (msec): min=25, max=131, avg=56.80, stdev=17.06 00:28:04.081 clat percentiles (msec): 00:28:04.081 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 00:28:04.081 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 59], 00:28:04.081 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 88], 00:28:04.081 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 132], 00:28:04.081 | 99.99th=[ 132] 00:28:04.081 bw ( KiB/s): min= 768, max= 1352, per=4.16%, avg=1122.80, stdev=170.17, samples=20 00:28:04.081 iops : min= 192, max= 338, avg=280.70, stdev=42.54, samples=20 00:28:04.081 lat (msec) : 50=37.04%, 100=60.48%, 250=2.49% 00:28:04.081 cpu : usr=42.63%, sys=2.94%, ctx=1476, majf=0, minf=9 00:28:04.081 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:28:04.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 issued rwts: total=2816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.081 filename0: (groupid=0, jobs=1): err= 0: pid=82114: Wed May 15 14:06:00 2024 00:28:04.081 read: IOPS=284, BW=1137KiB/s (1165kB/s)(11.1MiB/10008msec) 00:28:04.081 slat (usec): min=2, max=8020, avg=16.48, stdev=150.17 00:28:04.081 clat (msec): min=16, max=135, avg=56.18, stdev=15.91 00:28:04.081 lat (msec): min=16, max=135, avg=56.19, stdev=15.91 00:28:04.081 clat percentiles (msec): 00:28:04.081 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 44], 00:28:04.081 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 59], 00:28:04.081 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 82], 95.00th=[ 84], 00:28:04.081 | 99.00th=[ 96], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 136], 00:28:04.081 | 99.99th=[ 136] 00:28:04.081 bw ( KiB/s): min= 784, max= 1312, per=4.21%, avg=1134.80, stdev=137.08, samples=20 00:28:04.081 iops : min= 196, max= 328, avg=283.70, stdev=34.27, samples=20 00:28:04.081 lat (msec) : 20=0.21%, 50=42.38%, 100=56.68%, 250=0.74% 00:28:04.081 cpu : usr=31.25%, sys=1.96%, ctx=918, majf=0, minf=9 00:28:04.081 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:28:04.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 issued rwts: total=2846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.081 filename1: (groupid=0, jobs=1): err= 0: pid=82115: Wed May 15 14:06:00 2024 00:28:04.081 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10040msec) 00:28:04.081 slat (usec): min=4, max=8024, avg=23.56, stdev=281.91 00:28:04.081 clat (usec): min=1857, max=120118, avg=56788.48, stdev=18400.85 00:28:04.081 lat (usec): min=1870, max=120132, avg=56812.04, stdev=18398.69 00:28:04.081 clat percentiles (msec): 00:28:04.081 | 1.00th=[ 3], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 47], 00:28:04.081 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 61], 00:28:04.081 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 86], 00:28:04.081 | 99.00th=[ 97], 99.50th=[ 106], 99.90th=[ 121], 99.95th=[ 121], 00:28:04.081 | 99.99th=[ 121] 00:28:04.081 bw ( KiB/s): min= 768, max= 2055, per=4.16%, avg=1121.95, stdev=250.23, samples=20 00:28:04.081 iops : min= 192, max= 513, avg=280.45, stdev=62.41, samples=20 00:28:04.081 lat (msec) : 2=0.07%, 4=1.95%, 10=1.38%, 50=34.50%, 100=61.28% 00:28:04.081 lat (msec) : 250=0.81% 00:28:04.081 cpu : usr=33.54%, sys=2.42%, ctx=947, majf=0, minf=9 00:28:04.081 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.1%, 16=17.3%, 32=0.0%, >=64=0.0% 00:28:04.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 complete : 0=0.0%, 4=88.5%, 8=11.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 issued rwts: total=2823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.081 filename1: (groupid=0, jobs=1): err= 0: pid=82116: Wed May 15 14:06:00 2024 00:28:04.081 read: IOPS=280, BW=1121KiB/s (1148kB/s)(11.0MiB/10025msec) 00:28:04.081 slat (usec): min=3, max=8020, avg=25.00, stdev=262.85 00:28:04.081 clat (msec): min=25, max=119, avg=56.94, stdev=15.79 00:28:04.081 lat (msec): min=25, max=119, avg=56.97, stdev=15.80 00:28:04.081 clat percentiles (msec): 00:28:04.081 | 1.00th=[ 31], 5.00th=[ 35], 
10.00th=[ 39], 20.00th=[ 43], 00:28:04.081 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 59], 00:28:04.081 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 86], 00:28:04.081 | 99.00th=[ 100], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 110], 00:28:04.081 | 99.99th=[ 120] 00:28:04.081 bw ( KiB/s): min= 760, max= 1280, per=4.15%, avg=1118.15, stdev=145.82, samples=20 00:28:04.081 iops : min= 190, max= 320, avg=279.50, stdev=36.45, samples=20 00:28:04.081 lat (msec) : 50=37.37%, 100=61.64%, 250=1.00% 00:28:04.081 cpu : usr=42.73%, sys=3.10%, ctx=1104, majf=0, minf=9 00:28:04.081 IO depths : 1=0.1%, 2=0.8%, 4=2.9%, 8=80.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:04.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.081 issued rwts: total=2810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.081 filename1: (groupid=0, jobs=1): err= 0: pid=82117: Wed May 15 14:06:00 2024 00:28:04.081 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10024msec) 00:28:04.081 slat (usec): min=6, max=8027, avg=21.02, stdev=204.64 00:28:04.081 clat (msec): min=12, max=119, avg=59.04, stdev=16.63 00:28:04.081 lat (msec): min=12, max=119, avg=59.06, stdev=16.64 00:28:04.081 clat percentiles (msec): 00:28:04.081 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 46], 00:28:04.081 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62], 00:28:04.082 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 83], 95.00th=[ 88], 00:28:04.082 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 117], 99.95th=[ 117], 00:28:04.082 | 99.99th=[ 121] 00:28:04.082 bw ( KiB/s): min= 768, max= 1285, per=4.00%, avg=1078.65, stdev=163.22, samples=20 00:28:04.082 iops : min= 192, max= 321, avg=269.65, stdev=40.79, samples=20 00:28:04.082 lat (msec) : 20=0.59%, 50=30.69%, 100=66.62%, 250=2.10% 00:28:04.082 cpu : usr=39.58%, sys=2.95%, ctx=1312, majf=0, minf=9 00:28:04.082 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=75.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:04.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 complete : 0=0.0%, 4=89.5%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 issued rwts: total=2708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.082 filename1: (groupid=0, jobs=1): err= 0: pid=82118: Wed May 15 14:06:00 2024 00:28:04.082 read: IOPS=279, BW=1117KiB/s (1144kB/s)(10.9MiB/10019msec) 00:28:04.082 slat (usec): min=3, max=9020, avg=26.16, stdev=318.73 00:28:04.082 clat (msec): min=21, max=110, avg=57.14, stdev=17.76 00:28:04.082 lat (msec): min=21, max=110, avg=57.17, stdev=17.75 00:28:04.082 clat percentiles (msec): 00:28:04.082 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:28:04.082 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 58], 00:28:04.082 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 91], 00:28:04.082 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:28:04.082 | 99.99th=[ 111] 00:28:04.082 bw ( KiB/s): min= 768, max= 1304, per=4.14%, avg=1115.60, stdev=179.72, samples=20 00:28:04.082 iops : min= 192, max= 326, avg=278.90, stdev=44.93, samples=20 00:28:04.082 lat (msec) : 50=39.35%, 100=58.83%, 250=1.82% 00:28:04.082 cpu : usr=36.80%, sys=3.06%, ctx=1286, majf=0, minf=9 00:28:04.082 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=78.5%, 16=15.7%, 32=0.0%, >=64=0.0% 
00:28:04.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 issued rwts: total=2798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.082 filename1: (groupid=0, jobs=1): err= 0: pid=82119: Wed May 15 14:06:00 2024 00:28:04.082 read: IOPS=278, BW=1115KiB/s (1141kB/s)(10.9MiB/10023msec) 00:28:04.082 slat (usec): min=3, max=4025, avg=17.13, stdev=123.72 00:28:04.082 clat (msec): min=24, max=120, avg=57.31, stdev=15.20 00:28:04.082 lat (msec): min=24, max=120, avg=57.33, stdev=15.20 00:28:04.082 clat percentiles (msec): 00:28:04.082 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 44], 00:28:04.082 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 59], 00:28:04.082 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 80], 95.00th=[ 85], 00:28:04.082 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 111], 99.95th=[ 111], 00:28:04.082 | 99.99th=[ 122] 00:28:04.082 bw ( KiB/s): min= 784, max= 1392, per=4.13%, avg=1112.80, stdev=140.83, samples=20 00:28:04.082 iops : min= 196, max= 348, avg=278.20, stdev=35.21, samples=20 00:28:04.082 lat (msec) : 50=35.45%, 100=64.16%, 250=0.39% 00:28:04.082 cpu : usr=41.15%, sys=3.31%, ctx=1392, majf=0, minf=9 00:28:04.082 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=79.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:28:04.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 complete : 0=0.0%, 4=88.6%, 8=10.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 issued rwts: total=2793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.082 filename1: (groupid=0, jobs=1): err= 0: pid=82120: Wed May 15 14:06:00 2024 00:28:04.082 read: IOPS=287, BW=1152KiB/s (1179kB/s)(11.3MiB/10035msec) 00:28:04.082 slat (usec): min=4, max=4042, avg=17.32, stdev=121.35 00:28:04.082 clat (usec): min=1934, max=108199, avg=55447.11, stdev=17034.16 00:28:04.082 lat (usec): min=1947, max=108212, avg=55464.43, stdev=17036.98 00:28:04.082 clat percentiles (msec): 00:28:04.082 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 41], 00:28:04.082 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 00:28:04.082 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 85], 00:28:04.082 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 104], 99.95th=[ 109], 00:28:04.082 | 99.99th=[ 109] 00:28:04.082 bw ( KiB/s): min= 864, max= 1792, per=4.26%, avg=1149.20, stdev=191.16, samples=20 00:28:04.082 iops : min= 216, max= 448, avg=287.30, stdev=47.79, samples=20 00:28:04.082 lat (msec) : 2=0.07%, 4=0.62%, 10=1.52%, 50=37.38%, 100=60.09% 00:28:04.082 lat (msec) : 250=0.31% 00:28:04.082 cpu : usr=42.57%, sys=3.17%, ctx=1091, majf=0, minf=9 00:28:04.082 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:04.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.082 filename1: (groupid=0, jobs=1): err= 0: pid=82121: Wed May 15 14:06:00 2024 00:28:04.082 read: IOPS=291, BW=1165KiB/s (1193kB/s)(11.4MiB/10007msec) 00:28:04.082 slat (usec): min=4, max=8018, avg=21.23, stdev=196.20 00:28:04.082 clat (msec): min=8, max=118, avg=54.84, stdev=16.19 
00:28:04.082 lat (msec): min=8, max=118, avg=54.86, stdev=16.19 00:28:04.082 clat percentiles (msec): 00:28:04.082 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 41], 00:28:04.082 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 58], 00:28:04.082 | 70.00th=[ 62], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 85], 00:28:04.082 | 99.00th=[ 96], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:28:04.082 | 99.99th=[ 120] 00:28:04.082 bw ( KiB/s): min= 872, max= 1280, per=4.28%, avg=1154.11, stdev=115.53, samples=19 00:28:04.082 iops : min= 218, max= 320, avg=288.53, stdev=28.88, samples=19 00:28:04.082 lat (msec) : 10=0.45%, 20=0.31%, 50=40.12%, 100=58.37%, 250=0.75% 00:28:04.082 cpu : usr=46.53%, sys=3.14%, ctx=1123, majf=0, minf=9 00:28:04.082 IO depths : 1=0.1%, 2=0.3%, 4=1.6%, 8=81.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:04.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 issued rwts: total=2914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.082 filename1: (groupid=0, jobs=1): err= 0: pid=82122: Wed May 15 14:06:00 2024 00:28:04.082 read: IOPS=281, BW=1124KiB/s (1151kB/s)(11.0MiB/10021msec) 00:28:04.082 slat (usec): min=2, max=7097, avg=17.66, stdev=153.44 00:28:04.082 clat (msec): min=23, max=110, avg=56.80, stdev=16.02 00:28:04.082 lat (msec): min=23, max=110, avg=56.82, stdev=16.02 00:28:04.082 clat percentiles (msec): 00:28:04.082 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 42], 00:28:04.082 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 59], 00:28:04.082 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 88], 00:28:04.082 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 111], 00:28:04.082 | 99.99th=[ 111] 00:28:04.082 bw ( KiB/s): min= 784, max= 1304, per=4.16%, avg=1122.80, stdev=135.67, samples=20 00:28:04.082 iops : min= 196, max= 326, avg=280.70, stdev=33.92, samples=20 00:28:04.082 lat (msec) : 50=36.60%, 100=62.23%, 250=1.17% 00:28:04.082 cpu : usr=38.37%, sys=2.87%, ctx=1279, majf=0, minf=9 00:28:04.082 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:04.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.082 issued rwts: total=2817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.082 filename2: (groupid=0, jobs=1): err= 0: pid=82123: Wed May 15 14:06:00 2024 00:28:04.082 read: IOPS=284, BW=1139KiB/s (1167kB/s)(11.1MiB/10001msec) 00:28:04.082 slat (nsec): min=6178, max=59285, avg=13997.82, stdev=5218.22 00:28:04.082 clat (msec): min=2, max=119, avg=56.10, stdev=17.27 00:28:04.082 lat (msec): min=2, max=119, avg=56.12, stdev=17.27 00:28:04.082 clat percentiles (msec): 00:28:04.082 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 42], 00:28:04.083 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 57], 60.00th=[ 60], 00:28:04.083 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 85], 00:28:04.083 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 110], 99.95th=[ 120], 00:28:04.083 | 99.99th=[ 120] 00:28:04.083 bw ( KiB/s): min= 768, max= 1272, per=4.16%, avg=1120.42, stdev=156.39, samples=19 00:28:04.083 iops : min= 192, max= 318, avg=280.11, stdev=39.10, samples=19 00:28:04.083 lat (msec) : 4=0.25%, 10=0.91%, 20=0.32%, 
50=41.45%, 100=55.77% 00:28:04.083 lat (msec) : 250=1.30% 00:28:04.083 cpu : usr=31.97%, sys=2.14%, ctx=913, majf=0, minf=9 00:28:04.083 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:04.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 issued rwts: total=2849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.083 filename2: (groupid=0, jobs=1): err= 0: pid=82124: Wed May 15 14:06:00 2024 00:28:04.083 read: IOPS=281, BW=1126KiB/s (1153kB/s)(11.0MiB/10013msec) 00:28:04.083 slat (usec): min=2, max=8029, avg=22.84, stdev=261.29 00:28:04.083 clat (msec): min=20, max=107, avg=56.70, stdev=15.13 00:28:04.083 lat (msec): min=20, max=107, avg=56.73, stdev=15.14 00:28:04.083 clat percentiles (msec): 00:28:04.083 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 46], 00:28:04.083 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 61], 00:28:04.083 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 79], 95.00th=[ 85], 00:28:04.083 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:28:04.083 | 99.99th=[ 108] 00:28:04.083 bw ( KiB/s): min= 816, max= 1304, per=4.17%, avg=1123.50, stdev=123.95, samples=20 00:28:04.083 iops : min= 204, max= 326, avg=280.85, stdev=30.99, samples=20 00:28:04.083 lat (msec) : 50=39.41%, 100=60.31%, 250=0.28% 00:28:04.083 cpu : usr=31.21%, sys=2.02%, ctx=863, majf=0, minf=9 00:28:04.083 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:28:04.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 issued rwts: total=2819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.083 filename2: (groupid=0, jobs=1): err= 0: pid=82125: Wed May 15 14:06:00 2024 00:28:04.083 read: IOPS=277, BW=1112KiB/s (1138kB/s)(10.9MiB/10028msec) 00:28:04.083 slat (usec): min=6, max=8029, avg=20.98, stdev=227.61 00:28:04.083 clat (msec): min=8, max=124, avg=57.44, stdev=16.01 00:28:04.083 lat (msec): min=8, max=124, avg=57.46, stdev=16.01 00:28:04.083 clat percentiles (msec): 00:28:04.083 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 46], 00:28:04.083 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 00:28:04.083 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 85], 00:28:04.083 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 125], 00:28:04.083 | 99.99th=[ 125] 00:28:04.083 bw ( KiB/s): min= 768, max= 1396, per=4.11%, avg=1108.60, stdev=164.18, samples=20 00:28:04.083 iops : min= 192, max= 349, avg=277.15, stdev=41.05, samples=20 00:28:04.083 lat (msec) : 10=0.50%, 20=0.07%, 50=36.31%, 100=61.68%, 250=1.44% 00:28:04.083 cpu : usr=35.49%, sys=2.83%, ctx=1043, majf=0, minf=10 00:28:04.083 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:04.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 issued rwts: total=2787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.083 filename2: (groupid=0, jobs=1): err= 0: pid=82126: Wed May 15 14:06:00 2024 00:28:04.083 read: IOPS=286, BW=1145KiB/s 
(1173kB/s)(11.2MiB/10012msec) 00:28:04.083 slat (usec): min=2, max=4040, avg=16.10, stdev=97.87 00:28:04.083 clat (msec): min=21, max=122, avg=55.80, stdev=15.95 00:28:04.083 lat (msec): min=21, max=122, avg=55.82, stdev=15.95 00:28:04.083 clat percentiles (msec): 00:28:04.083 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 00:28:04.083 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 58], 00:28:04.083 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 85], 00:28:04.083 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 121], 99.95th=[ 123], 00:28:04.083 | 99.99th=[ 123] 00:28:04.083 bw ( KiB/s): min= 752, max= 1328, per=4.23%, avg=1139.50, stdev=157.68, samples=20 00:28:04.083 iops : min= 188, max= 332, avg=284.85, stdev=39.42, samples=20 00:28:04.083 lat (msec) : 50=39.01%, 100=59.87%, 250=1.12% 00:28:04.083 cpu : usr=40.61%, sys=2.83%, ctx=1358, majf=0, minf=9 00:28:04.083 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:04.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 issued rwts: total=2866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.083 filename2: (groupid=0, jobs=1): err= 0: pid=82127: Wed May 15 14:06:00 2024 00:28:04.083 read: IOPS=293, BW=1173KiB/s (1201kB/s)(11.5MiB/10004msec) 00:28:04.083 slat (usec): min=2, max=8034, avg=26.99, stdev=330.47 00:28:04.083 clat (msec): min=4, max=128, avg=54.48, stdev=16.49 00:28:04.083 lat (msec): min=4, max=128, avg=54.51, stdev=16.49 00:28:04.083 clat percentiles (msec): 00:28:04.083 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:28:04.083 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 59], 00:28:04.083 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 78], 95.00th=[ 84], 00:28:04.083 | 99.00th=[ 96], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 129], 00:28:04.083 | 99.99th=[ 129] 00:28:04.083 bw ( KiB/s): min= 944, max= 1328, per=4.30%, avg=1158.42, stdev=110.21, samples=19 00:28:04.083 iops : min= 236, max= 332, avg=289.58, stdev=27.56, samples=19 00:28:04.083 lat (msec) : 10=0.89%, 20=0.31%, 50=44.08%, 100=54.04%, 250=0.68% 00:28:04.083 cpu : usr=31.06%, sys=2.18%, ctx=907, majf=0, minf=9 00:28:04.083 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:28:04.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 issued rwts: total=2933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.083 filename2: (groupid=0, jobs=1): err= 0: pid=82128: Wed May 15 14:06:00 2024 00:28:04.083 read: IOPS=274, BW=1099KiB/s (1125kB/s)(10.8MiB/10027msec) 00:28:04.083 slat (usec): min=3, max=8019, avg=18.51, stdev=215.73 00:28:04.083 clat (msec): min=8, max=124, avg=58.10, stdev=15.01 00:28:04.083 lat (msec): min=8, max=124, avg=58.11, stdev=15.00 00:28:04.083 clat percentiles (msec): 00:28:04.083 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 48], 00:28:04.083 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 00:28:04.083 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 80], 95.00th=[ 85], 00:28:04.083 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 106], 99.95th=[ 108], 00:28:04.083 | 99.99th=[ 126] 00:28:04.083 bw ( KiB/s): min= 824, max= 1380, per=4.07%, avg=1097.45, 
stdev=118.22, samples=20 00:28:04.083 iops : min= 206, max= 345, avg=274.35, stdev=29.55, samples=20 00:28:04.083 lat (msec) : 10=0.51%, 20=0.07%, 50=32.89%, 100=65.92%, 250=0.62% 00:28:04.083 cpu : usr=31.62%, sys=2.08%, ctx=901, majf=0, minf=9 00:28:04.083 IO depths : 1=0.1%, 2=0.5%, 4=2.3%, 8=80.2%, 16=17.0%, 32=0.0%, >=64=0.0% 00:28:04.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 complete : 0=0.0%, 4=88.5%, 8=11.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 issued rwts: total=2755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.083 filename2: (groupid=0, jobs=1): err= 0: pid=82129: Wed May 15 14:06:00 2024 00:28:04.083 read: IOPS=272, BW=1091KiB/s (1118kB/s)(10.7MiB/10023msec) 00:28:04.083 slat (usec): min=6, max=8026, avg=26.55, stdev=304.88 00:28:04.083 clat (msec): min=24, max=115, avg=58.49, stdev=16.38 00:28:04.083 lat (msec): min=24, max=115, avg=58.51, stdev=16.38 00:28:04.083 clat percentiles (msec): 00:28:04.083 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 00:28:04.083 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 59], 00:28:04.083 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 89], 00:28:04.083 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 114], 99.95th=[ 114], 00:28:04.083 | 99.99th=[ 115] 00:28:04.083 bw ( KiB/s): min= 768, max= 1280, per=4.04%, avg=1089.60, stdev=159.89, samples=20 00:28:04.083 iops : min= 192, max= 320, avg=272.40, stdev=39.97, samples=20 00:28:04.083 lat (msec) : 50=32.76%, 100=66.00%, 250=1.24% 00:28:04.083 cpu : usr=34.64%, sys=2.70%, ctx=1174, majf=0, minf=9 00:28:04.083 IO depths : 1=0.1%, 2=1.4%, 4=5.9%, 8=76.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:28:04.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.083 issued rwts: total=2735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.084 filename2: (groupid=0, jobs=1): err= 0: pid=82130: Wed May 15 14:06:00 2024 00:28:04.084 read: IOPS=296, BW=1185KiB/s (1213kB/s)(11.6MiB/10002msec) 00:28:04.084 slat (usec): min=5, max=4035, avg=17.32, stdev=104.22 00:28:04.084 clat (msec): min=2, max=112, avg=53.93, stdev=16.42 00:28:04.084 lat (msec): min=2, max=112, avg=53.94, stdev=16.42 00:28:04.084 clat percentiles (msec): 00:28:04.084 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:28:04.084 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 58], 00:28:04.084 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 78], 95.00th=[ 84], 00:28:04.084 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 112], 00:28:04.084 | 99.99th=[ 112] 00:28:04.084 bw ( KiB/s): min= 920, max= 1312, per=4.33%, avg=1168.00, stdev=109.66, samples=19 00:28:04.084 iops : min= 230, max= 328, avg=292.00, stdev=27.41, samples=19 00:28:04.084 lat (msec) : 4=0.20%, 10=0.74%, 20=0.44%, 50=42.56%, 100=55.75% 00:28:04.084 lat (msec) : 250=0.30% 00:28:04.084 cpu : usr=40.97%, sys=3.17%, ctx=1371, majf=0, minf=9 00:28:04.084 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:04.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.084 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.084 issued rwts: total=2963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.084 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:28:04.084 00:28:04.084 Run status group 0 (all jobs): 00:28:04.084 READ: bw=26.3MiB/s (27.6MB/s), 1075KiB/s-1185KiB/s (1100kB/s-1213kB/s), io=264MiB (277MB), run=10001-10040msec 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 bdev_null0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 [2024-05-15 14:06:00.890551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 bdev_null1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:04.084 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local asan_lib= 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.085 { 00:28:04.085 "params": { 00:28:04.085 "name": "Nvme$subsystem", 00:28:04.085 "trtype": "$TEST_TRANSPORT", 00:28:04.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.085 "adrfam": "ipv4", 00:28:04.085 "trsvcid": "$NVMF_PORT", 00:28:04.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.085 "hdgst": ${hdgst:-false}, 00:28:04.085 "ddgst": ${ddgst:-false} 00:28:04.085 }, 00:28:04.085 "method": "bdev_nvme_attach_controller" 00:28:04.085 } 00:28:04.085 EOF 00:28:04.085 )") 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.085 { 00:28:04.085 "params": { 00:28:04.085 "name": "Nvme$subsystem", 00:28:04.085 "trtype": "$TEST_TRANSPORT", 00:28:04.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.085 "adrfam": "ipv4", 00:28:04.085 "trsvcid": "$NVMF_PORT", 00:28:04.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.085 "hdgst": ${hdgst:-false}, 00:28:04.085 "ddgst": ${ddgst:-false} 00:28:04.085 }, 00:28:04.085 "method": "bdev_nvme_attach_controller" 00:28:04.085 } 00:28:04.085 EOF 00:28:04.085 )") 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
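Note: the rpc_cmd calls traced above build the two null-bdev subsystems entirely over JSON-RPC. A minimal sketch of the same sequence issued by hand (assuming rpc_cmd resolves to scripts/rpc.py in the checked-out repo; every argument is copied from the traced commands, and the 10.0.0.2:4420 TCP listener matches the target address used throughout this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path of the rpc_cmd wrapper's target script
  for i in 0 1; do
      # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (as in the trace)
      $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
      # subsystem, namespace and TCP listener for that bdev
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          --serial-number "53313233-$i" --allow-any-host
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done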
00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:04.085 "params": { 00:28:04.085 "name": "Nvme0", 00:28:04.085 "trtype": "tcp", 00:28:04.085 "traddr": "10.0.0.2", 00:28:04.085 "adrfam": "ipv4", 00:28:04.085 "trsvcid": "4420", 00:28:04.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.085 "hdgst": false, 00:28:04.085 "ddgst": false 00:28:04.085 }, 00:28:04.085 "method": "bdev_nvme_attach_controller" 00:28:04.085 },{ 00:28:04.085 "params": { 00:28:04.085 "name": "Nvme1", 00:28:04.085 "trtype": "tcp", 00:28:04.085 "traddr": "10.0.0.2", 00:28:04.085 "adrfam": "ipv4", 00:28:04.085 "trsvcid": "4420", 00:28:04.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.085 "hdgst": false, 00:28:04.085 "ddgst": false 00:28:04.085 }, 00:28:04.085 "method": "bdev_nvme_attach_controller" 00:28:04.085 }' 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:04.085 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:04.085 14:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:04.085 14:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:04.085 14:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:04.085 14:06:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.085 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:04.085 ... 00:28:04.085 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:04.085 ... 
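Note: fio is launched with the SPDK bdev plugin preloaded, and both the resolved bdev_nvme_attach_controller JSON printed above and the generated job file are fed to it over /dev/fd descriptors. A rough standalone equivalent, assuming that JSON is saved to a file named bdev.json and that the attached controllers expose namespaces Nvme0n1 and Nvme1n1 (bdev names not shown in the log); job values mirror the dif.sh@115 settings from the trace (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), with time_based assumed since all jobs ran ~5 s:

  # sketch only, not the harness's exact invocation
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --thread=1 --time_based=1 --runtime=5 --numjobs=2 \
      --rw=randread --bs=8k,16k,128k --iodepth=8 \
      --name=filename0 --filename=Nvme0n1 \
      --name=filename1 --filename=Nvme1n1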
00:28:04.085 fio-3.35 00:28:04.085 Starting 4 threads 00:28:08.280 00:28:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=82276: Wed May 15 14:06:06 2024 00:28:08.280 read: IOPS=2383, BW=18.6MiB/s (19.5MB/s)(93.1MiB/5001msec) 00:28:08.280 slat (nsec): min=5997, max=76277, avg=12415.77, stdev=2917.50 00:28:08.280 clat (usec): min=1235, max=12258, avg=3317.17, stdev=548.07 00:28:08.280 lat (usec): min=1248, max=12272, avg=3329.59, stdev=547.43 00:28:08.280 clat percentiles (usec): 00:28:08.280 | 1.00th=[ 2671], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:28:08.280 | 30.00th=[ 2835], 40.00th=[ 3097], 50.00th=[ 3425], 60.00th=[ 3458], 00:28:08.280 | 70.00th=[ 3490], 80.00th=[ 3589], 90.00th=[ 4080], 95.00th=[ 4293], 00:28:08.280 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 7439], 99.95th=[ 8979], 00:28:08.280 | 99.99th=[10814] 00:28:08.280 bw ( KiB/s): min=17792, max=20192, per=23.20%, avg=19158.22, stdev=1105.53, samples=9 00:28:08.280 iops : min= 2224, max= 2524, avg=2394.78, stdev=138.19, samples=9 00:28:08.280 lat (msec) : 2=0.29%, 4=87.11%, 10=12.58%, 20=0.03% 00:28:08.280 cpu : usr=89.46%, sys=9.86%, ctx=11, majf=0, minf=0 00:28:08.280 IO depths : 1=0.1%, 2=11.6%, 4=61.7%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 issued rwts: total=11918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.280 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:08.280 filename0: (groupid=0, jobs=1): err= 0: pid=82277: Wed May 15 14:06:06 2024 00:28:08.280 read: IOPS=2777, BW=21.7MiB/s (22.8MB/s)(109MiB/5002msec) 00:28:08.280 slat (nsec): min=5841, max=46619, avg=9480.97, stdev=3085.58 00:28:08.280 clat (usec): min=897, max=12855, avg=2856.20, stdev=796.04 00:28:08.280 lat (usec): min=904, max=12862, avg=2865.68, stdev=795.45 00:28:08.280 clat percentiles (usec): 00:28:08.280 | 1.00th=[ 1647], 5.00th=[ 1680], 10.00th=[ 1713], 20.00th=[ 1942], 00:28:08.280 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2966], 00:28:08.280 | 70.00th=[ 3326], 80.00th=[ 3425], 90.00th=[ 4015], 95.00th=[ 4293], 00:28:08.280 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 7439], 99.95th=[ 8848], 00:28:08.280 | 99.99th=[12387] 00:28:08.280 bw ( KiB/s): min=19968, max=25024, per=26.55%, avg=21926.22, stdev=2306.75, samples=9 00:28:08.280 iops : min= 2496, max= 3128, avg=2740.78, stdev=288.34, samples=9 00:28:08.280 lat (usec) : 1000=0.38% 00:28:08.280 lat (msec) : 2=21.63%, 4=67.70%, 10=10.26%, 20=0.03% 00:28:08.280 cpu : usr=90.46%, sys=8.68%, ctx=23, majf=0, minf=0 00:28:08.280 IO depths : 1=0.1%, 2=0.5%, 4=67.2%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 issued rwts: total=13893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.280 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:08.280 filename1: (groupid=0, jobs=1): err= 0: pid=82278: Wed May 15 14:06:06 2024 00:28:08.280 read: IOPS=2780, BW=21.7MiB/s (22.8MB/s)(109MiB/5003msec) 00:28:08.280 slat (nsec): min=5864, max=64613, avg=9927.94, stdev=3341.91 00:28:08.280 clat (usec): min=862, max=12309, avg=2852.48, stdev=791.46 00:28:08.280 lat (usec): min=869, max=12317, avg=2862.41, stdev=790.53 00:28:08.280 clat percentiles (usec): 00:28:08.280 | 1.00th=[ 1647], 5.00th=[ 1680], 
10.00th=[ 1713], 20.00th=[ 1942], 00:28:08.280 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2966], 00:28:08.280 | 70.00th=[ 3326], 80.00th=[ 3425], 90.00th=[ 4015], 95.00th=[ 4228], 00:28:08.280 | 99.00th=[ 4424], 99.50th=[ 4490], 99.90th=[ 7373], 99.95th=[ 8848], 00:28:08.280 | 99.99th=[10814] 00:28:08.280 bw ( KiB/s): min=19984, max=25024, per=26.59%, avg=21952.00, stdev=2285.29, samples=9 00:28:08.280 iops : min= 2498, max= 3128, avg=2744.00, stdev=285.66, samples=9 00:28:08.280 lat (usec) : 1000=0.40% 00:28:08.280 lat (msec) : 2=21.50%, 4=68.10%, 10=9.97%, 20=0.02% 00:28:08.280 cpu : usr=90.54%, sys=8.64%, ctx=14, majf=0, minf=0 00:28:08.280 IO depths : 1=0.1%, 2=0.5%, 4=67.2%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 issued rwts: total=13909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.280 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:08.280 filename1: (groupid=0, jobs=1): err= 0: pid=82279: Wed May 15 14:06:06 2024 00:28:08.280 read: IOPS=2382, BW=18.6MiB/s (19.5MB/s)(93.1MiB/5002msec) 00:28:08.280 slat (nsec): min=6226, max=49741, avg=12432.62, stdev=2742.91 00:28:08.280 clat (usec): min=1237, max=12249, avg=3318.79, stdev=548.61 00:28:08.280 lat (usec): min=1250, max=12263, avg=3331.23, stdev=549.17 00:28:08.280 clat percentiles (usec): 00:28:08.280 | 1.00th=[ 2671], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:28:08.280 | 30.00th=[ 2868], 40.00th=[ 3097], 50.00th=[ 3425], 60.00th=[ 3458], 00:28:08.280 | 70.00th=[ 3490], 80.00th=[ 3589], 90.00th=[ 4080], 95.00th=[ 4293], 00:28:08.280 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 7439], 99.95th=[ 8979], 00:28:08.280 | 99.99th=[10814] 00:28:08.280 bw ( KiB/s): min=17792, max=20192, per=23.20%, avg=19153.78, stdev=1101.40, samples=9 00:28:08.280 iops : min= 2224, max= 2524, avg=2394.22, stdev=137.68, samples=9 00:28:08.280 lat (msec) : 2=0.29%, 4=87.10%, 10=12.59%, 20=0.03% 00:28:08.280 cpu : usr=90.36%, sys=8.90%, ctx=12, majf=0, minf=10 00:28:08.280 IO depths : 1=0.1%, 2=11.6%, 4=61.7%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.280 issued rwts: total=11918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.280 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:08.280 00:28:08.280 Run status group 0 (all jobs): 00:28:08.280 READ: bw=80.6MiB/s (84.6MB/s), 18.6MiB/s-21.7MiB/s (19.5MB/s-22.8MB/s), io=403MiB (423MB), run=5001-5003msec 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 14:06:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 ************************************ 00:28:08.540 END TEST fio_dif_rand_params 00:28:08.540 ************************************ 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.540 00:28:08.540 real 0m23.492s 00:28:08.540 user 2m1.782s 00:28:08.540 sys 0m10.631s 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:08.540 14:06:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 14:06:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:08.540 14:06:07 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:08.540 14:06:07 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:08.540 14:06:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 ************************************ 00:28:08.540 START TEST fio_dif_digest 00:28:08.540 ************************************ 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:08.540 14:06:07 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:08.540 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.541 bdev_null0 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.541 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.541 [2024-05-15 14:06:07.095963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.801 { 00:28:08.801 "params": { 00:28:08.801 "name": "Nvme$subsystem", 00:28:08.801 "trtype": "$TEST_TRANSPORT", 00:28:08.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.801 "adrfam": "ipv4", 00:28:08.801 "trsvcid": "$NVMF_PORT", 00:28:08.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.801 "hdgst": ${hdgst:-false}, 00:28:08.801 "ddgst": ${ddgst:-false} 00:28:08.801 }, 00:28:08.801 "method": "bdev_nvme_attach_controller" 00:28:08.801 } 00:28:08.801 EOF 00:28:08.801 )") 00:28:08.801 
14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:08.801 "params": { 00:28:08.801 "name": "Nvme0", 00:28:08.801 "trtype": "tcp", 00:28:08.801 "traddr": "10.0.0.2", 00:28:08.801 "adrfam": "ipv4", 00:28:08.801 "trsvcid": "4420", 00:28:08.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:08.801 "hdgst": true, 00:28:08.801 "ddgst": true 00:28:08.801 }, 00:28:08.801 "method": "bdev_nvme_attach_controller" 00:28:08.801 }' 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:08.801 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:08.802 14:06:07 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:08.802 14:06:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.802 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:08.802 ... 00:28:08.802 fio-3.35 00:28:08.802 Starting 3 threads 00:28:21.016 00:28:21.016 filename0: (groupid=0, jobs=1): err= 0: pid=82385: Wed May 15 14:06:17 2024 00:28:21.016 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(354MiB/10004msec) 00:28:21.016 slat (nsec): min=6280, max=78989, avg=14971.26, stdev=6764.52 00:28:21.016 clat (usec): min=7021, max=23159, avg=10561.03, stdev=619.97 00:28:21.016 lat (usec): min=7030, max=23170, avg=10576.01, stdev=620.47 00:28:21.016 clat percentiles (usec): 00:28:21.016 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:28:21.016 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10421], 00:28:21.016 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11076], 95.00th=[11207], 00:28:21.016 | 99.00th=[11731], 99.50th=[11994], 99.90th=[23200], 99.95th=[23200], 00:28:21.016 | 99.99th=[23200] 00:28:21.016 bw ( KiB/s): min=33792, max=37632, per=33.33%, avg=36213.37, stdev=1259.00, samples=19 00:28:21.016 iops : min= 264, max= 294, avg=282.89, stdev= 9.83, samples=19 00:28:21.016 lat (msec) : 10=0.32%, 20=99.58%, 50=0.11% 00:28:21.016 cpu : usr=88.95%, sys=10.49%, ctx=19, majf=0, minf=9 00:28:21.016 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.016 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:21.016 filename0: (groupid=0, jobs=1): err= 0: pid=82386: Wed May 15 14:06:17 2024 00:28:21.016 read: IOPS=282, BW=35.4MiB/s (37.1MB/s)(354MiB/10001msec) 00:28:21.016 slat (nsec): min=5492, max=60944, avg=17725.93, stdev=10307.18 00:28:21.016 clat (usec): min=9168, max=23055, avg=10563.23, stdev=620.40 00:28:21.016 lat (usec): min=9175, max=23069, avg=10580.96, stdev=620.04 00:28:21.016 clat percentiles (usec): 00:28:21.016 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:28:21.016 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10421], 00:28:21.016 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11076], 95.00th=[11207], 00:28:21.016 | 99.00th=[11731], 99.50th=[11994], 99.90th=[22938], 99.95th=[22938], 00:28:21.016 | 99.99th=[22938] 00:28:21.016 bw ( KiB/s): min=34560, max=37632, per=33.33%, avg=36217.26, stdev=1152.37, samples=19 00:28:21.016 iops : min= 270, max= 294, avg=282.95, stdev= 9.00, samples=19 00:28:21.016 lat (msec) : 10=0.11%, 20=99.79%, 50=0.11% 00:28:21.016 cpu : usr=91.01%, sys=8.49%, ctx=16, majf=0, minf=0 00:28:21.016 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.016 issued rwts: total=2829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:21.016 filename0: (groupid=0, jobs=1): err= 0: pid=82387: Wed May 15 14:06:17 2024 00:28:21.016 read: 
IOPS=283, BW=35.4MiB/s (37.1MB/s)(354MiB/10005msec) 00:28:21.016 slat (nsec): min=6209, max=61547, avg=18214.83, stdev=9961.42 00:28:21.016 clat (usec): min=7205, max=27566, avg=10554.51, stdev=694.92 00:28:21.016 lat (usec): min=7212, max=27581, avg=10572.73, stdev=694.82 00:28:21.016 clat percentiles (usec): 00:28:21.016 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:28:21.016 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10421], 00:28:21.016 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11076], 95.00th=[11207], 00:28:21.016 | 99.00th=[11600], 99.50th=[11994], 99.90th=[27657], 99.95th=[27657], 00:28:21.016 | 99.99th=[27657] 00:28:21.016 bw ( KiB/s): min=33792, max=37632, per=33.33%, avg=36217.26, stdev=1286.72, samples=19 00:28:21.016 iops : min= 264, max= 294, avg=282.95, stdev=10.05, samples=19 00:28:21.016 lat (msec) : 10=0.21%, 20=99.68%, 50=0.11% 00:28:21.016 cpu : usr=90.41%, sys=8.81%, ctx=47, majf=0, minf=0 00:28:21.016 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.016 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:21.016 00:28:21.016 Run status group 0 (all jobs): 00:28:21.016 READ: bw=106MiB/s (111MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=1062MiB (1113MB), run=10001-10005msec 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.016 ************************************ 00:28:21.016 END TEST fio_dif_digest 00:28:21.016 ************************************ 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.016 00:28:21.016 real 0m11.003s 00:28:21.016 user 0m27.642s 00:28:21.016 sys 0m3.079s 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.016 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.016 14:06:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:21.016 14:06:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:21.016 14:06:18 nvmf_dif -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.016 rmmod nvme_tcp 00:28:21.016 rmmod nvme_fabrics 00:28:21.016 rmmod nvme_keyring 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 81622 ']' 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 81622 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 81622 ']' 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 81622 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81622 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:21.016 killing process with pid 81622 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81622' 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@965 -- # kill 81622 00:28:21.016 [2024-05-15 14:06:18.250962] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:21.016 14:06:18 nvmf_dif -- common/autotest_common.sh@970 -- # wait 81622 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:21.016 14:06:18 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:21.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:21.016 Waiting for block devices as requested 00:28:21.016 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:21.016 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:21.016 14:06:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:21.016 14:06:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:21.016 14:06:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.016 14:06:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:21.016 14:06:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.016 14:06:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:21.016 14:06:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.016 14:06:19 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:21.016 00:28:21.016 real 1m0.398s 00:28:21.016 user 3m45.049s 00:28:21.016 sys 0m23.797s 00:28:21.016 14:06:19 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.016 ************************************ 00:28:21.016 END TEST nvmf_dif 00:28:21.016 ************************************ 00:28:21.016 14:06:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:21.016 14:06:19 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:21.016 14:06:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:21.016 14:06:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:21.016 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:28:21.016 ************************************ 00:28:21.016 START TEST nvmf_abort_qd_sizes 00:28:21.016 ************************************ 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:21.016 * Looking for test storage... 00:28:21.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:21.016 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:21.017 14:06:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:21.017 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:21.275 Cannot find device "nvmf_tgt_br" 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:21.275 Cannot find device "nvmf_tgt_br2" 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:21.275 Cannot find device "nvmf_tgt_br" 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:21.275 Cannot find device "nvmf_tgt_br2" 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:21.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:21.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:21.275 14:06:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:21.275 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:21.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:28:21.534 00:28:21.534 --- 10.0.0.2 ping statistics --- 00:28:21.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.534 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:21.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:21.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:28:21.534 00:28:21.534 --- 10.0.0.3 ping statistics --- 00:28:21.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.534 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:21.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:28:21.534 00:28:21.534 --- 10.0.0.1 ping statistics --- 00:28:21.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.534 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:21.534 14:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:22.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:22.524 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:22.524 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:22.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=82994 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 82994 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 82994 ']' 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:22.524 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:22.784 [2024-05-15 14:06:21.111043] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:28:22.784 [2024-05-15 14:06:21.111455] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.784 [2024-05-15 14:06:21.238788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.784 [2024-05-15 14:06:21.332905] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.784 [2024-05-15 14:06:21.332944] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.784 [2024-05-15 14:06:21.332954] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.784 [2024-05-15 14:06:21.332962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.784 [2024-05-15 14:06:21.332968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.784 [2024-05-15 14:06:21.333078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.784 [2024-05-15 14:06:21.333182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.784 [2024-05-15 14:06:21.333272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.784 [2024-05-15 14:06:21.333331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.723 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:23.723 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:28:23.723 14:06:21 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:23.723 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.723 14:06:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:28:23.723 14:06:22 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:23.723 14:06:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.723 ************************************ 00:28:23.723 START TEST spdk_target_abort 00:28:23.723 ************************************ 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.723 spdk_targetn1 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.723 [2024-05-15 14:06:22.168468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.723 [2024-05-15 14:06:22.208372] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:23.723 [2024-05-15 14:06:22.208688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:23.723 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.724 14:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.040 Initializing NVMe Controllers 00:28:27.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:27.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:27.040 Initialization complete. Launching workers. 
00:28:27.040 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11930, failed: 0 00:28:27.040 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1079, failed to submit 10851 00:28:27.040 success 760, unsuccess 319, failed 0 00:28:27.040 14:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.040 14:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:30.330 Initializing NVMe Controllers 00:28:30.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.330 Initialization complete. Launching workers. 00:28:30.330 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8983, failed: 0 00:28:30.330 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1174, failed to submit 7809 00:28:30.330 success 379, unsuccess 795, failed 0 00:28:30.330 14:06:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:30.330 14:06:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.616 Initializing NVMe Controllers 00:28:33.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:33.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:33.616 Initialization complete. Launching workers. 
00:28:33.616 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35301, failed: 0 00:28:33.616 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2358, failed to submit 32943 00:28:33.616 success 551, unsuccess 1807, failed 0 00:28:33.616 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:33.616 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.616 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:33.616 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.616 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:33.617 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.617 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 82994 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 82994 ']' 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 82994 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82994 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:34.185 killing process with pid 82994 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82994' 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 82994 00:28:34.185 [2024-05-15 14:06:32.700968] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:34.185 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 82994 00:28:34.457 00:28:34.457 real 0m10.831s 00:28:34.457 user 0m40.345s 00:28:34.457 sys 0m3.466s 00:28:34.457 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:34.457 14:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:34.457 ************************************ 00:28:34.457 END TEST spdk_target_abort 00:28:34.457 ************************************ 00:28:34.457 14:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:34.458 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:34.458 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:28:34.458 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:34.458 ************************************ 00:28:34.458 START TEST kernel_target_abort 00:28:34.458 ************************************ 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:34.458 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:34.720 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:34.720 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:34.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:35.238 Waiting for block devices as requested 00:28:35.238 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.238 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:35.497 No valid GPT data, bailing 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:35.497 No valid GPT data, bailing 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:35.497 No valid GPT data, bailing 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:28:35.497 14:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:35.497 No valid GPT data, bailing 00:28:35.497 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:35.756 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:35.756 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:35.756 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:28:35.756 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c --hostid=0861b14b-2c7f-48b6-89d0-4545a86e1b4c -a 10.0.0.1 -t tcp -s 4420 00:28:35.757 00:28:35.757 Discovery Log Number of Records 2, Generation counter 2 00:28:35.757 =====Discovery Log Entry 0====== 00:28:35.757 trtype: tcp 00:28:35.757 adrfam: ipv4 00:28:35.757 subtype: current discovery subsystem 00:28:35.757 treq: not specified, sq flow control disable supported 00:28:35.757 portid: 1 00:28:35.757 trsvcid: 4420 00:28:35.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:35.757 traddr: 10.0.0.1 00:28:35.757 eflags: none 00:28:35.757 sectype: none 00:28:35.757 =====Discovery Log Entry 1====== 00:28:35.757 trtype: tcp 00:28:35.757 adrfam: ipv4 00:28:35.757 subtype: nvme subsystem 00:28:35.757 treq: not specified, sq flow control disable supported 00:28:35.757 portid: 1 00:28:35.757 trsvcid: 4420 00:28:35.757 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:35.757 traddr: 10.0.0.1 00:28:35.757 eflags: none 00:28:35.757 sectype: none 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:35.757 14:06:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:35.757 14:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:39.045 Initializing NVMe Controllers 00:28:39.045 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:39.045 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:39.045 Initialization complete. Launching workers. 00:28:39.045 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38615, failed: 0 00:28:39.046 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38615, failed to submit 0 00:28:39.046 success 0, unsuccess 38615, failed 0 00:28:39.046 14:06:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:39.046 14:06:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:42.375 Initializing NVMe Controllers 00:28:42.375 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:42.375 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:42.375 Initialization complete. Launching workers. 
00:28:42.375 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74143, failed: 0 00:28:42.375 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38843, failed to submit 35300 00:28:42.375 success 0, unsuccess 38843, failed 0 00:28:42.375 14:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:42.375 14:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:45.683 Initializing NVMe Controllers 00:28:45.683 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:45.683 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:45.684 Initialization complete. Launching workers. 00:28:45.684 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104945, failed: 0 00:28:45.684 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26236, failed to submit 78709 00:28:45.684 success 0, unsuccess 26236, failed 0 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:45.684 14:06:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:46.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:49.541 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:49.541 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:49.541 00:28:49.541 real 0m14.827s 00:28:49.541 user 0m6.178s 00:28:49.541 sys 0m5.918s 00:28:49.541 14:06:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:49.541 14:06:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.541 ************************************ 00:28:49.541 END TEST kernel_target_abort 00:28:49.541 ************************************ 00:28:49.541 14:06:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:49.541 14:06:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:49.541 
14:06:47 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:49.541 14:06:47 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:49.801 rmmod nvme_tcp 00:28:49.801 rmmod nvme_fabrics 00:28:49.801 rmmod nvme_keyring 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 82994 ']' 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 82994 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 82994 ']' 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 82994 00:28:49.801 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (82994) - No such process 00:28:49.801 Process with pid 82994 is not found 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 82994 is not found' 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:49.801 14:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:50.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:50.369 Waiting for block devices as requested 00:28:50.369 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:50.628 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:50.628 ************************************ 00:28:50.628 END TEST nvmf_abort_qd_sizes 00:28:50.628 ************************************ 00:28:50.628 00:28:50.628 real 0m29.724s 00:28:50.628 user 0m47.706s 00:28:50.628 sys 0m11.279s 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:50.628 14:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.628 14:06:49 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:50.628 14:06:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:50.629 14:06:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:50.629 14:06:49 -- common/autotest_common.sh@10 -- # set +x 
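(For reference: the kernel_target_abort loop traced above boils down to building a transport-ID string, running the abort example at each queue depth, and then tearing the kernel nvmet target back down through configfs. A minimal bash sketch of that flow, using the paths and NQN from this run — illustrative only, not the actual abort_qd_sizes.sh; the enable-attribute path in the teardown is assumed from clean_kernel_target, since the trace only shows "echo 0":

#!/usr/bin/env bash
# Sketch of the kernel_target_abort flow seen above (illustrative).
subnqn=nqn.2016-06.io.spdk:testnqn
target="trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:${subnqn}"
qds=(4 24 64)

# 50% read/write 4 KiB I/O, aborted at each queue depth.
for qd in "${qds[@]}"; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

# clean_kernel_target: disable the namespace, unlink it from the port, then
# remove the configfs directories and unload the kernel modules.
echo 0 > /sys/kernel/config/nvmet/subsystems/${subnqn}/namespaces/1/enable  # attribute path assumed
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/${subnqn}
rmdir /sys/kernel/config/nvmet/subsystems/${subnqn}/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/${subnqn}
modprobe -r nvmet_tcp nvmet
)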
00:28:50.896 ************************************ 00:28:50.896 START TEST keyring_file 00:28:50.896 ************************************ 00:28:50.896 14:06:49 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:50.896 * Looking for test storage... 00:28:50.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:50.896 14:06:49 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:50.896 14:06:49 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0861b14b-2c7f-48b6-89d0-4545a86e1b4c 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:50.896 14:06:49 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.896 14:06:49 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.896 14:06:49 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.896 14:06:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.896 14:06:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.896 14:06:49 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.896 14:06:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:50.896 14:06:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:50.896 14:06:49 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:50.896 14:06:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:50.896 14:06:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:50.896 14:06:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:50.897 14:06:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:50.897 14:06:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:50.897 14:06:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:50.897 14:06:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vTHS4ojLMe 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:50.897 14:06:49 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vTHS4ojLMe 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vTHS4ojLMe 00:28:50.897 14:06:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vTHS4ojLMe 00:28:50.897 14:06:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7qrmZhJYGi 00:28:50.897 14:06:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:50.897 14:06:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:51.156 14:06:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7qrmZhJYGi 00:28:51.156 14:06:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7qrmZhJYGi 00:28:51.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.156 14:06:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.7qrmZhJYGi 00:28:51.156 14:06:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=83881 00:28:51.156 14:06:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 83881 00:28:51.156 14:06:49 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:51.156 14:06:49 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 83881 ']' 00:28:51.156 14:06:49 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.156 14:06:49 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:51.156 14:06:49 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.156 14:06:49 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:51.156 14:06:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.156 [2024-05-15 14:06:49.543277] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 
00:28:51.156 [2024-05-15 14:06:49.543579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83881 ] 00:28:51.156 [2024-05-15 14:06:49.689938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.415 [2024-05-15 14:06:49.783447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:28:51.982 14:06:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 [2024-05-15 14:06:50.373339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.982 null0 00:28:51.982 [2024-05-15 14:06:50.405218] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:51.982 [2024-05-15 14:06:50.405284] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:51.982 [2024-05-15 14:06:50.405467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:51.982 [2024-05-15 14:06:50.413229] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.982 14:06:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 [2024-05-15 14:06:50.429218] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:51.982 request: 00:28:51.982 { 00:28:51.982 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.982 "secure_channel": false, 00:28:51.982 "listen_address": { 00:28:51.982 "trtype": "tcp", 00:28:51.982 "traddr": "127.0.0.1", 00:28:51.982 "trsvcid": "4420" 00:28:51.982 }, 00:28:51.982 "method": "nvmf_subsystem_add_listener", 00:28:51.982 "req_id": 1 00:28:51.982 } 00:28:51.982 Got JSON-RPC error response 00:28:51.982 response: 00:28:51.982 { 00:28:51.982 "code": -32602, 00:28:51.982 "message": "Invalid parameters" 00:28:51.982 } 00:28:51.982 
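(The prep_key helper traced at keyring/file.sh@26 and @27 above amounts to: write an NVMe/TCP interchange-format PSK to a mktemp file, chmod it to 0600 (keyring_file_check_path rejects looser permissions, as a later negative test shows), and hand the path to keyring_file_add_key over the app's RPC socket once bdevperf is up. A minimal sketch under those assumptions — the real key string is produced by format_interchange_psk's python one-liner in nvmf/common.sh, so the echoed contents below are a placeholder:

# Sketch of prep_key + key registration (illustrative; placeholder key material).
key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)                        # e.g. /tmp/tmp.vTHS4ojLMe in this run
echo "NVMeTLSkey-1:..." > "$path"     # placeholder; format_interchange_psk emits the real PSK string
chmod 0600 "$path"                    # wider permissions fail keyring_file_check_path
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_file_add_key key0 "$path"
)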
14:06:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.982 14:06:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=83903 00:28:51.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.982 14:06:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 83903 /var/tmp/bperf.sock 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 83903 ']' 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.982 14:06:50 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:51.982 14:06:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 [2024-05-15 14:06:50.507619] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:28:51.983 [2024-05-15 14:06:50.508149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83903 ] 00:28:52.242 [2024-05-15 14:06:50.667130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.242 [2024-05-15 14:06:50.766187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.810 14:06:51 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:52.810 14:06:51 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:28:52.810 14:06:51 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:52.810 14:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:53.069 14:06:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7qrmZhJYGi 00:28:53.069 14:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7qrmZhJYGi 00:28:53.328 14:06:51 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:53.328 14:06:51 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:53.328 14:06:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.328 14:06:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:53.328 14:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.587 14:06:51 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.vTHS4ojLMe == \/\t\m\p\/\t\m\p\.\v\T\H\S\4\o\j\L\M\e ]] 00:28:53.587 14:06:51 keyring_file -- keyring/file.sh@52 -- # 
get_key key1 00:28:53.587 14:06:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:53.587 14:06:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.587 14:06:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:53.587 14:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.587 14:06:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.7qrmZhJYGi == \/\t\m\p\/\t\m\p\.\7\q\r\m\Z\h\J\Y\G\i ]] 00:28:53.587 14:06:52 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:53.587 14:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:53.587 14:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.587 14:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.587 14:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:53.587 14:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.845 14:06:52 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:53.845 14:06:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:53.845 14:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.845 14:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:53.845 14:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.845 14:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.845 14:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:54.103 14:06:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:54.104 14:06:52 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.104 14:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.361 [2024-05-15 14:06:52.683940] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:54.361 nvme0n1 00:28:54.361 14:06:52 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:54.361 14:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:54.361 14:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:54.361 14:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.361 14:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:54.361 14:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.619 14:06:52 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:54.619 14:06:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:54.619 14:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:54.619 14:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:54.619 14:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.619 14:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:28:54.619 14:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.878 14:06:53 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:54.878 14:06:53 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.878 Running I/O for 1 seconds... 00:28:55.814 00:28:55.814 Latency(us) 00:28:55.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.814 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:55.814 nvme0n1 : 1.01 13506.60 52.76 0.00 0.00 9449.63 4053.23 14317.91 00:28:55.814 =================================================================================================================== 00:28:55.814 Total : 13506.60 52.76 0.00 0.00 9449.63 4053.23 14317.91 00:28:55.814 0 00:28:55.814 14:06:54 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:55.814 14:06:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:56.074 14:06:54 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:56.074 14:06:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.074 14:06:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:56.074 14:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.074 14:06:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.074 14:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:56.333 14:06:54 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:56.333 14:06:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:56.333 14:06:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:56.333 14:06:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.333 14:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:56.333 14:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.333 14:06:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.592 14:06:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:56.592 14:06:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:56.592 14:06:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:56.592 14:06:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:56.592 14:06:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:56.592 14:06:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:56.592 14:06:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:56.592 14:06:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:56.592 14:06:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:56.592 14:06:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:56.592 [2024-05-15 14:06:55.081996] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:56.592 [2024-05-15 14:06:55.082609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7242b0 (107): Transport endpoint is not connected 00:28:56.592 [2024-05-15 14:06:55.083590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7242b0 (9): Bad file descriptor 00:28:56.592 [2024-05-15 14:06:55.084586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:56.592 [2024-05-15 14:06:55.084616] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:56.592 [2024-05-15 14:06:55.084627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:56.592 request: 00:28:56.592 { 00:28:56.592 "name": "nvme0", 00:28:56.592 "trtype": "tcp", 00:28:56.592 "traddr": "127.0.0.1", 00:28:56.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.592 "adrfam": "ipv4", 00:28:56.592 "trsvcid": "4420", 00:28:56.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.592 "psk": "key1", 00:28:56.592 "method": "bdev_nvme_attach_controller", 00:28:56.592 "req_id": 1 00:28:56.592 } 00:28:56.592 Got JSON-RPC error response 00:28:56.592 response: 00:28:56.592 { 00:28:56.592 "code": -32602, 00:28:56.592 "message": "Invalid parameters" 00:28:56.592 } 00:28:56.592 14:06:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:56.592 14:06:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:56.592 14:06:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:56.592 14:06:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:56.592 14:06:55 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:56.592 14:06:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:56.592 14:06:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.592 14:06:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.592 14:06:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.592 14:06:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:56.852 14:06:55 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:56.852 14:06:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:56.852 14:06:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:56.852 14:06:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.852 14:06:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.852 14:06:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:56.852 14:06:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.112 14:06:55 keyring_file -- 
keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:57.112 14:06:55 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:57.112 14:06:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:57.371 14:06:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:57.371 14:06:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:57.629 14:06:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:57.629 14:06:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.629 14:06:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:57.629 14:06:56 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:57.629 14:06:56 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.vTHS4ojLMe 00:28:57.629 14:06:56 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:57.629 14:06:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:57.629 14:06:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:57.629 14:06:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:57.629 14:06:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.629 14:06:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:57.629 14:06:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.629 14:06:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:57.629 14:06:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:57.888 [2024-05-15 14:06:56.308990] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vTHS4ojLMe': 0100660 00:28:57.888 [2024-05-15 14:06:56.309059] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:57.888 request: 00:28:57.888 { 00:28:57.888 "name": "key0", 00:28:57.888 "path": "/tmp/tmp.vTHS4ojLMe", 00:28:57.888 "method": "keyring_file_add_key", 00:28:57.888 "req_id": 1 00:28:57.888 } 00:28:57.888 Got JSON-RPC error response 00:28:57.888 response: 00:28:57.888 { 00:28:57.888 "code": -1, 00:28:57.888 "message": "Operation not permitted" 00:28:57.888 } 00:28:57.888 14:06:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:57.888 14:06:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:57.888 14:06:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:57.888 14:06:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:57.888 14:06:56 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.vTHS4ojLMe 00:28:57.888 14:06:56 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:57.888 14:06:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vTHS4ojLMe 00:28:58.146 14:06:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.vTHS4ojLMe 00:28:58.146 
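(The positive path that the refcount checks above exercise is a plain bdev_nvme_attach_controller that names the registered key; the negative tests around it re-add the key file with 0660 permissions and then delete it outright. The attach call itself, as issued through bperf_cmd in this run:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# With the key file removed (rm -f above), the same call fails with
# "Failed to obtain key 'key0': No such file or directory", as shown just below.
)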
14:06:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:58.147 14:06:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:58.147 14:06:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:58.147 14:06:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.147 14:06:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:58.147 14:06:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.414 14:06:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:58.414 14:06:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.414 14:06:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.414 [2024-05-15 14:06:56.908126] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vTHS4ojLMe': No such file or directory 00:28:58.414 [2024-05-15 14:06:56.908169] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:58.414 [2024-05-15 14:06:56.908193] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:58.414 [2024-05-15 14:06:56.908201] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:58.414 [2024-05-15 14:06:56.908209] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:58.414 request: 00:28:58.414 { 00:28:58.414 "name": "nvme0", 00:28:58.414 "trtype": "tcp", 00:28:58.414 "traddr": "127.0.0.1", 00:28:58.414 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:58.414 "adrfam": "ipv4", 00:28:58.414 "trsvcid": "4420", 00:28:58.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.414 "psk": "key0", 00:28:58.414 "method": "bdev_nvme_attach_controller", 00:28:58.414 "req_id": 1 00:28:58.414 } 00:28:58.414 Got JSON-RPC error response 00:28:58.414 response: 00:28:58.414 { 00:28:58.414 "code": -19, 00:28:58.414 "message": "No such device" 00:28:58.414 } 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:58.414 14:06:56 
keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:58.414 14:06:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:58.414 14:06:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:58.414 14:06:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:58.676 14:06:57 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CszXc9CBZX 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:58.676 14:06:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:58.676 14:06:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:58.676 14:06:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:58.676 14:06:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:58.676 14:06:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:58.676 14:06:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CszXc9CBZX 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CszXc9CBZX 00:28:58.676 14:06:57 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.CszXc9CBZX 00:28:58.676 14:06:57 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CszXc9CBZX 00:28:58.676 14:06:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CszXc9CBZX 00:28:58.935 14:06:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.935 14:06:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.194 nvme0n1 00:28:59.194 14:06:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:59.194 14:06:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.194 14:06:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.194 14:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.194 14:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.194 14:06:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.453 14:06:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:59.453 14:06:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:59.453 14:06:57 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:59.713 14:06:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:59.713 14:06:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.713 14:06:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:59.713 14:06:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.713 14:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.971 14:06:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:59.971 14:06:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:59.971 14:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:00.230 14:06:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:00.230 14:06:58 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:00.230 14:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.489 14:06:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:00.489 14:06:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CszXc9CBZX 00:29:00.489 14:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CszXc9CBZX 00:29:00.489 14:06:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7qrmZhJYGi 00:29:00.489 14:06:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7qrmZhJYGi 00:29:00.747 14:06:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:00.747 14:06:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:01.007 nvme0n1 00:29:01.007 14:06:59 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:01.007 14:06:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:01.266 14:06:59 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:01.266 "subsystems": [ 00:29:01.266 { 00:29:01.266 "subsystem": "keyring", 00:29:01.266 "config": [ 00:29:01.266 { 00:29:01.266 "method": 
"keyring_file_add_key", 00:29:01.266 "params": { 00:29:01.266 "name": "key0", 00:29:01.266 "path": "/tmp/tmp.CszXc9CBZX" 00:29:01.266 } 00:29:01.266 }, 00:29:01.266 { 00:29:01.266 "method": "keyring_file_add_key", 00:29:01.266 "params": { 00:29:01.266 "name": "key1", 00:29:01.266 "path": "/tmp/tmp.7qrmZhJYGi" 00:29:01.266 } 00:29:01.266 } 00:29:01.266 ] 00:29:01.266 }, 00:29:01.266 { 00:29:01.266 "subsystem": "iobuf", 00:29:01.266 "config": [ 00:29:01.266 { 00:29:01.266 "method": "iobuf_set_options", 00:29:01.266 "params": { 00:29:01.266 "small_pool_count": 8192, 00:29:01.266 "large_pool_count": 1024, 00:29:01.266 "small_bufsize": 8192, 00:29:01.266 "large_bufsize": 135168 00:29:01.266 } 00:29:01.266 } 00:29:01.266 ] 00:29:01.266 }, 00:29:01.266 { 00:29:01.266 "subsystem": "sock", 00:29:01.266 "config": [ 00:29:01.266 { 00:29:01.266 "method": "sock_impl_set_options", 00:29:01.266 "params": { 00:29:01.266 "impl_name": "uring", 00:29:01.266 "recv_buf_size": 2097152, 00:29:01.266 "send_buf_size": 2097152, 00:29:01.266 "enable_recv_pipe": true, 00:29:01.266 "enable_quickack": false, 00:29:01.266 "enable_placement_id": 0, 00:29:01.266 "enable_zerocopy_send_server": false, 00:29:01.266 "enable_zerocopy_send_client": false, 00:29:01.266 "zerocopy_threshold": 0, 00:29:01.266 "tls_version": 0, 00:29:01.266 "enable_ktls": false 00:29:01.266 } 00:29:01.266 }, 00:29:01.266 { 00:29:01.266 "method": "sock_impl_set_options", 00:29:01.266 "params": { 00:29:01.266 "impl_name": "posix", 00:29:01.266 "recv_buf_size": 2097152, 00:29:01.266 "send_buf_size": 2097152, 00:29:01.266 "enable_recv_pipe": true, 00:29:01.266 "enable_quickack": false, 00:29:01.266 "enable_placement_id": 0, 00:29:01.266 "enable_zerocopy_send_server": true, 00:29:01.266 "enable_zerocopy_send_client": false, 00:29:01.266 "zerocopy_threshold": 0, 00:29:01.266 "tls_version": 0, 00:29:01.266 "enable_ktls": false 00:29:01.266 } 00:29:01.266 }, 00:29:01.266 { 00:29:01.266 "method": "sock_impl_set_options", 00:29:01.266 "params": { 00:29:01.266 "impl_name": "ssl", 00:29:01.266 "recv_buf_size": 4096, 00:29:01.266 "send_buf_size": 4096, 00:29:01.266 "enable_recv_pipe": true, 00:29:01.266 "enable_quickack": false, 00:29:01.266 "enable_placement_id": 0, 00:29:01.266 "enable_zerocopy_send_server": true, 00:29:01.266 "enable_zerocopy_send_client": false, 00:29:01.266 "zerocopy_threshold": 0, 00:29:01.266 "tls_version": 0, 00:29:01.266 "enable_ktls": false 00:29:01.266 } 00:29:01.266 } 00:29:01.266 ] 00:29:01.266 }, 00:29:01.266 { 00:29:01.266 "subsystem": "vmd", 00:29:01.266 "config": [] 00:29:01.266 }, 00:29:01.266 { 00:29:01.266 "subsystem": "accel", 00:29:01.266 "config": [ 00:29:01.266 { 00:29:01.266 "method": "accel_set_options", 00:29:01.266 "params": { 00:29:01.267 "small_cache_size": 128, 00:29:01.267 "large_cache_size": 16, 00:29:01.267 "task_count": 2048, 00:29:01.267 "sequence_count": 2048, 00:29:01.267 "buf_count": 2048 00:29:01.267 } 00:29:01.267 } 00:29:01.267 ] 00:29:01.267 }, 00:29:01.267 { 00:29:01.267 "subsystem": "bdev", 00:29:01.267 "config": [ 00:29:01.267 { 00:29:01.267 "method": "bdev_set_options", 00:29:01.267 "params": { 00:29:01.267 "bdev_io_pool_size": 65535, 00:29:01.267 "bdev_io_cache_size": 256, 00:29:01.267 "bdev_auto_examine": true, 00:29:01.267 "iobuf_small_cache_size": 128, 00:29:01.267 "iobuf_large_cache_size": 16 00:29:01.267 } 00:29:01.267 }, 00:29:01.267 { 00:29:01.267 "method": "bdev_raid_set_options", 00:29:01.267 "params": { 00:29:01.267 "process_window_size_kb": 1024 00:29:01.267 } 00:29:01.267 }, 
00:29:01.267 { 00:29:01.267 "method": "bdev_iscsi_set_options", 00:29:01.267 "params": { 00:29:01.267 "timeout_sec": 30 00:29:01.267 } 00:29:01.267 }, 00:29:01.267 { 00:29:01.267 "method": "bdev_nvme_set_options", 00:29:01.267 "params": { 00:29:01.267 "action_on_timeout": "none", 00:29:01.267 "timeout_us": 0, 00:29:01.267 "timeout_admin_us": 0, 00:29:01.267 "keep_alive_timeout_ms": 10000, 00:29:01.267 "arbitration_burst": 0, 00:29:01.267 "low_priority_weight": 0, 00:29:01.267 "medium_priority_weight": 0, 00:29:01.267 "high_priority_weight": 0, 00:29:01.267 "nvme_adminq_poll_period_us": 10000, 00:29:01.267 "nvme_ioq_poll_period_us": 0, 00:29:01.267 "io_queue_requests": 512, 00:29:01.267 "delay_cmd_submit": true, 00:29:01.267 "transport_retry_count": 4, 00:29:01.267 "bdev_retry_count": 3, 00:29:01.267 "transport_ack_timeout": 0, 00:29:01.267 "ctrlr_loss_timeout_sec": 0, 00:29:01.267 "reconnect_delay_sec": 0, 00:29:01.267 "fast_io_fail_timeout_sec": 0, 00:29:01.267 "disable_auto_failback": false, 00:29:01.267 "generate_uuids": false, 00:29:01.267 "transport_tos": 0, 00:29:01.267 "nvme_error_stat": false, 00:29:01.267 "rdma_srq_size": 0, 00:29:01.267 "io_path_stat": false, 00:29:01.267 "allow_accel_sequence": false, 00:29:01.267 "rdma_max_cq_size": 0, 00:29:01.267 "rdma_cm_event_timeout_ms": 0, 00:29:01.267 "dhchap_digests": [ 00:29:01.267 "sha256", 00:29:01.267 "sha384", 00:29:01.267 "sha512" 00:29:01.267 ], 00:29:01.267 "dhchap_dhgroups": [ 00:29:01.267 "null", 00:29:01.267 "ffdhe2048", 00:29:01.267 "ffdhe3072", 00:29:01.267 "ffdhe4096", 00:29:01.267 "ffdhe6144", 00:29:01.267 "ffdhe8192" 00:29:01.267 ] 00:29:01.267 } 00:29:01.267 }, 00:29:01.267 { 00:29:01.267 "method": "bdev_nvme_attach_controller", 00:29:01.267 "params": { 00:29:01.267 "name": "nvme0", 00:29:01.267 "trtype": "TCP", 00:29:01.267 "adrfam": "IPv4", 00:29:01.267 "traddr": "127.0.0.1", 00:29:01.267 "trsvcid": "4420", 00:29:01.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.267 "prchk_reftag": false, 00:29:01.267 "prchk_guard": false, 00:29:01.267 "ctrlr_loss_timeout_sec": 0, 00:29:01.267 "reconnect_delay_sec": 0, 00:29:01.267 "fast_io_fail_timeout_sec": 0, 00:29:01.267 "psk": "key0", 00:29:01.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:01.267 "hdgst": false, 00:29:01.267 "ddgst": false 00:29:01.267 } 00:29:01.267 }, 00:29:01.267 { 00:29:01.267 "method": "bdev_nvme_set_hotplug", 00:29:01.267 "params": { 00:29:01.267 "period_us": 100000, 00:29:01.267 "enable": false 00:29:01.267 } 00:29:01.267 }, 00:29:01.267 { 00:29:01.267 "method": "bdev_wait_for_examine" 00:29:01.267 } 00:29:01.267 ] 00:29:01.267 }, 00:29:01.267 { 00:29:01.267 "subsystem": "nbd", 00:29:01.267 "config": [] 00:29:01.267 } 00:29:01.267 ] 00:29:01.267 }' 00:29:01.267 14:06:59 keyring_file -- keyring/file.sh@114 -- # killprocess 83903 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 83903 ']' 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@950 -- # kill -0 83903 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@951 -- # uname 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83903 00:29:01.267 killing process with pid 83903 00:29:01.267 Received shutdown signal, test time was about 1.000000 seconds 00:29:01.267 00:29:01.267 Latency(us) 00:29:01.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.267 
=================================================================================================================== 00:29:01.267 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83903' 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@965 -- # kill 83903 00:29:01.267 14:06:59 keyring_file -- common/autotest_common.sh@970 -- # wait 83903 00:29:01.527 14:06:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=84135 00:29:01.527 14:06:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84135 /var/tmp/bperf.sock 00:29:01.527 14:06:59 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 84135 ']' 00:29:01.527 14:06:59 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.527 14:06:59 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:01.527 14:06:59 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.527 14:06:59 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:01.527 14:06:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:01.527 "subsystems": [ 00:29:01.527 { 00:29:01.527 "subsystem": "keyring", 00:29:01.527 "config": [ 00:29:01.527 { 00:29:01.527 "method": "keyring_file_add_key", 00:29:01.527 "params": { 00:29:01.527 "name": "key0", 00:29:01.527 "path": "/tmp/tmp.CszXc9CBZX" 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "keyring_file_add_key", 00:29:01.527 "params": { 00:29:01.527 "name": "key1", 00:29:01.527 "path": "/tmp/tmp.7qrmZhJYGi" 00:29:01.527 } 00:29:01.527 } 00:29:01.527 ] 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "subsystem": "iobuf", 00:29:01.527 "config": [ 00:29:01.527 { 00:29:01.527 "method": "iobuf_set_options", 00:29:01.527 "params": { 00:29:01.527 "small_pool_count": 8192, 00:29:01.527 "large_pool_count": 1024, 00:29:01.527 "small_bufsize": 8192, 00:29:01.527 "large_bufsize": 135168 00:29:01.527 } 00:29:01.527 } 00:29:01.527 ] 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "subsystem": "sock", 00:29:01.527 "config": [ 00:29:01.527 { 00:29:01.527 "method": "sock_impl_set_options", 00:29:01.527 "params": { 00:29:01.527 "impl_name": "uring", 00:29:01.527 "recv_buf_size": 2097152, 00:29:01.527 "send_buf_size": 2097152, 00:29:01.527 "enable_recv_pipe": true, 00:29:01.527 "enable_quickack": false, 00:29:01.527 "enable_placement_id": 0, 00:29:01.527 "enable_zerocopy_send_server": false, 00:29:01.527 "enable_zerocopy_send_client": false, 00:29:01.527 "zerocopy_threshold": 0, 00:29:01.527 "tls_version": 0, 00:29:01.527 "enable_ktls": false 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "sock_impl_set_options", 00:29:01.527 "params": { 00:29:01.527 "impl_name": "posix", 00:29:01.527 "recv_buf_size": 2097152, 00:29:01.527 "send_buf_size": 2097152, 00:29:01.527 "enable_recv_pipe": true, 00:29:01.527 "enable_quickack": false, 00:29:01.527 "enable_placement_id": 0, 00:29:01.527 "enable_zerocopy_send_server": true, 00:29:01.527 "enable_zerocopy_send_client": false, 00:29:01.527 "zerocopy_threshold": 0, 00:29:01.527 "tls_version": 
0, 00:29:01.527 "enable_ktls": false 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "sock_impl_set_options", 00:29:01.527 "params": { 00:29:01.527 "impl_name": "ssl", 00:29:01.527 "recv_buf_size": 4096, 00:29:01.527 "send_buf_size": 4096, 00:29:01.527 "enable_recv_pipe": true, 00:29:01.527 "enable_quickack": false, 00:29:01.527 "enable_placement_id": 0, 00:29:01.527 "enable_zerocopy_send_server": true, 00:29:01.527 "enable_zerocopy_send_client": false, 00:29:01.527 "zerocopy_threshold": 0, 00:29:01.527 "tls_version": 0, 00:29:01.527 "enable_ktls": false 00:29:01.527 } 00:29:01.527 } 00:29:01.527 ] 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "subsystem": "vmd", 00:29:01.527 "config": [] 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "subsystem": "accel", 00:29:01.527 "config": [ 00:29:01.527 { 00:29:01.527 "method": "accel_set_options", 00:29:01.527 "params": { 00:29:01.527 "small_cache_size": 128, 00:29:01.527 "large_cache_size": 16, 00:29:01.527 "task_count": 2048, 00:29:01.527 "sequence_count": 2048, 00:29:01.527 "buf_count": 2048 00:29:01.527 } 00:29:01.527 } 00:29:01.527 ] 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "subsystem": "bdev", 00:29:01.527 "config": [ 00:29:01.527 { 00:29:01.527 "method": "bdev_set_options", 00:29:01.527 "params": { 00:29:01.527 "bdev_io_pool_size": 65535, 00:29:01.527 "bdev_io_cache_size": 256, 00:29:01.527 "bdev_auto_examine": true, 00:29:01.527 "iobuf_small_cache_size": 128, 00:29:01.527 "iobuf_large_cache_size": 16 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "bdev_raid_set_options", 00:29:01.527 "params": { 00:29:01.527 "process_window_size_kb": 1024 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "bdev_iscsi_set_options", 00:29:01.527 "params": { 00:29:01.527 "timeout_sec": 30 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "bdev_nvme_set_options", 00:29:01.527 "params": { 00:29:01.527 "action_on_timeout": "none", 00:29:01.527 "timeout_us": 0, 00:29:01.527 "timeout_admin_us": 0, 00:29:01.527 "keep_alive_timeout_ms": 10000, 00:29:01.527 "arbitration_burst": 0, 00:29:01.527 "low_priority_weight": 0, 00:29:01.527 "medium_priority_weight": 0, 00:29:01.527 "high_priority_weight": 0, 00:29:01.527 "nvme_adminq_poll_period_us": 10000, 00:29:01.527 "nvme_ioq_poll_period_us": 0, 00:29:01.527 "io_queue_requests": 512, 00:29:01.527 "delay_cmd_submit": true, 00:29:01.527 "transport_retry_count": 4, 00:29:01.527 "bdev_retry_count": 3, 00:29:01.527 "transport_ack_timeout": 0, 00:29:01.527 "ctrlr_loss_timeout_sec": 0, 00:29:01.527 "reconnect_delay_sec": 0, 00:29:01.527 "fast_io_fail_timeout_sec": 0, 00:29:01.527 "disable_auto_failback": false, 00:29:01.527 "generate_uuids": false, 00:29:01.527 "transport_tos": 0, 00:29:01.527 "nvme_error_stat": false, 00:29:01.527 "rdma_srq_size": 0, 00:29:01.527 "io_path_stat": false, 00:29:01.527 "allow_accel_sequence": false, 00:29:01.527 "rdma_max_cq_size": 0, 00:29:01.527 "rdma_cm_event_timeout_ms": 0, 00:29:01.527 "dhchap_digests": [ 00:29:01.527 "sha256", 00:29:01.527 "sha384", 00:29:01.527 "sha512" 00:29:01.527 ], 00:29:01.527 "dhchap_dhgroups": [ 00:29:01.527 "null", 00:29:01.527 "ffdhe2048", 00:29:01.527 "ffdhe3072", 00:29:01.527 "ffdhe4096", 00:29:01.527 "ffdhe6144", 00:29:01.527 "ffdhe8192" 00:29:01.527 ] 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "bdev_nvme_attach_controller", 00:29:01.527 "params": { 00:29:01.527 "name": "nvme0", 00:29:01.527 "trtype": "TCP", 00:29:01.527 "adrfam": "IPv4", 00:29:01.527 "traddr": 
"127.0.0.1", 00:29:01.527 "trsvcid": "4420", 00:29:01.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.527 "prchk_reftag": false, 00:29:01.527 "prchk_guard": false, 00:29:01.527 "ctrlr_loss_timeout_sec": 0, 00:29:01.527 "reconnect_delay_sec": 0, 00:29:01.527 "fast_io_fail_timeout_sec": 0, 00:29:01.527 "psk": "key0", 00:29:01.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:01.527 "hdgst": false, 00:29:01.527 "ddgst": false 00:29:01.527 } 00:29:01.527 }, 00:29:01.527 { 00:29:01.527 "method": "bdev_nvme_set_hotplug", 00:29:01.527 "params": { 00:29:01.527 "period_us": 100000, 00:29:01.527 "enable": false 00:29:01.528 } 00:29:01.528 }, 00:29:01.528 { 00:29:01.528 "method": "bdev_wait_for_examine" 00:29:01.528 } 00:29:01.528 ] 00:29:01.528 }, 00:29:01.528 { 00:29:01.528 "subsystem": "nbd", 00:29:01.528 "config": [] 00:29:01.528 } 00:29:01.528 ] 00:29:01.528 }' 00:29:01.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.528 14:06:59 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:01.528 14:06:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:01.528 [2024-05-15 14:06:59.998525] Starting SPDK v24.05-pre git sha1 c3870302f / DPDK 23.11.0 initialization... 00:29:01.528 [2024-05-15 14:06:59.998593] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84135 ] 00:29:01.787 [2024-05-15 14:07:00.139264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.787 [2024-05-15 14:07:00.239352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.046 [2024-05-15 14:07:00.400923] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:02.305 14:07:00 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:02.305 14:07:00 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:29:02.305 14:07:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:02.305 14:07:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:02.305 14:07:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.565 14:07:01 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:02.565 14:07:01 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:02.565 14:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:02.565 14:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.565 14:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.565 14:07:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.565 14:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:02.823 14:07:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:02.823 14:07:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:02.823 14:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:02.823 14:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.823 14:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.823 14:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:02.824 14:07:01 
keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.083 14:07:01 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:03.083 14:07:01 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:03.083 14:07:01 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:03.083 14:07:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:03.343 14:07:01 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:03.343 14:07:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:03.343 14:07:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CszXc9CBZX /tmp/tmp.7qrmZhJYGi 00:29:03.343 14:07:01 keyring_file -- keyring/file.sh@20 -- # killprocess 84135 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 84135 ']' 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@950 -- # kill -0 84135 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@951 -- # uname 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84135 00:29:03.343 killing process with pid 84135 00:29:03.343 Received shutdown signal, test time was about 1.000000 seconds 00:29:03.343 00:29:03.343 Latency(us) 00:29:03.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.343 =================================================================================================================== 00:29:03.343 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84135' 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@965 -- # kill 84135 00:29:03.343 14:07:01 keyring_file -- common/autotest_common.sh@970 -- # wait 84135 00:29:03.612 14:07:01 keyring_file -- keyring/file.sh@21 -- # killprocess 83881 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 83881 ']' 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@950 -- # kill -0 83881 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@951 -- # uname 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83881 00:29:03.612 killing process with pid 83881 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83881' 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@965 -- # kill 83881 00:29:03.612 [2024-05-15 14:07:01.941415] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:03.612 [2024-05-15 14:07:01.941451] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: 
deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:03.612 14:07:01 keyring_file -- common/autotest_common.sh@970 -- # wait 83881 00:29:03.884 ************************************ 00:29:03.884 END TEST keyring_file 00:29:03.884 ************************************ 00:29:03.884 00:29:03.884 real 0m13.102s 00:29:03.884 user 0m30.740s 00:29:03.884 sys 0m3.347s 00:29:03.884 14:07:02 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:03.884 14:07:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:03.884 14:07:02 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:29:03.884 14:07:02 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:03.884 14:07:02 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:29:03.884 14:07:02 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:03.884 14:07:02 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:03.884 14:07:02 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:03.884 14:07:02 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:29:03.884 14:07:02 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:29:03.884 14:07:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:03.884 14:07:02 -- common/autotest_common.sh@10 -- # set +x 00:29:03.884 14:07:02 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:29:03.884 14:07:02 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:29:03.884 14:07:02 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:29:03.884 14:07:02 -- common/autotest_common.sh@10 -- # set +x 00:29:06.417 INFO: APP EXITING 00:29:06.417 INFO: killing all VMs 00:29:06.417 INFO: killing vhost app 00:29:06.417 INFO: EXIT DONE 00:29:06.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:06.985 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:06.985 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:07.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:07.922 Cleaning 00:29:07.922 Removing: /var/run/dpdk/spdk0/config 00:29:07.922 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:07.922 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:07.922 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:07.922 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:07.922 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:07.922 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:07.922 Removing: /var/run/dpdk/spdk1/config 00:29:07.922 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:07.922 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:07.922 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:07.922 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:07.922 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:29:07.922 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:07.922 Removing: /var/run/dpdk/spdk2/config 00:29:07.922 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:07.922 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:07.922 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:07.922 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:07.922 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:07.922 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:07.922 Removing: /var/run/dpdk/spdk3/config 00:29:07.922 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:07.922 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:07.922 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:07.922 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:07.923 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:07.923 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:07.923 Removing: /var/run/dpdk/spdk4/config 00:29:07.923 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:07.923 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:07.923 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:07.923 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:07.923 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:07.923 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:07.923 Removing: /dev/shm/nvmf_trace.0 00:29:07.923 Removing: /dev/shm/spdk_tgt_trace.pid58099 00:29:07.923 Removing: /var/run/dpdk/spdk0 00:29:07.923 Removing: /var/run/dpdk/spdk1 00:29:08.181 Removing: /var/run/dpdk/spdk2 00:29:08.181 Removing: /var/run/dpdk/spdk3 00:29:08.181 Removing: /var/run/dpdk/spdk4 00:29:08.181 Removing: /var/run/dpdk/spdk_pid57954 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58099 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58297 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58378 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58406 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58515 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58533 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58651 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58831 00:29:08.181 Removing: /var/run/dpdk/spdk_pid58976 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59036 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59112 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59197 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59269 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59307 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59343 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59404 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59504 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59920 00:29:08.181 Removing: /var/run/dpdk/spdk_pid59972 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60018 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60034 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60095 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60111 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60178 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60189 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60240 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60258 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60298 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60316 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60433 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60463 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60543 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60589 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60619 00:29:08.181 Removing: 
/var/run/dpdk/spdk_pid60683 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60712 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60751 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60781 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60816 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60850 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60885 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60919 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60954 00:29:08.181 Removing: /var/run/dpdk/spdk_pid60988 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61023 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61057 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61094 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61123 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61163 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61192 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61232 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61264 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61307 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61336 00:29:08.181 Removing: /var/run/dpdk/spdk_pid61378 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61448 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61532 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61841 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61856 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61892 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61906 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61921 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61940 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61954 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61975 00:29:08.440 Removing: /var/run/dpdk/spdk_pid61994 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62007 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62023 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62042 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62061 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62076 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62095 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62110 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62130 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62150 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62158 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62179 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62208 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62223 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62258 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62321 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62345 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62360 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62383 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62398 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62400 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62448 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62462 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62490 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62500 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62509 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62523 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62528 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62543 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62547 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62561 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62591 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62617 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62627 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62655 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62665 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62672 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62713 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62730 
00:29:08.440 Removing: /var/run/dpdk/spdk_pid62756 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62764 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62771 00:29:08.440 Removing: /var/run/dpdk/spdk_pid62779 00:29:08.441 Removing: /var/run/dpdk/spdk_pid62788 00:29:08.441 Removing: /var/run/dpdk/spdk_pid62796 00:29:08.441 Removing: /var/run/dpdk/spdk_pid62809 00:29:08.441 Removing: /var/run/dpdk/spdk_pid62811 00:29:08.441 Removing: /var/run/dpdk/spdk_pid62885 00:29:08.441 Removing: /var/run/dpdk/spdk_pid62927 00:29:08.441 Removing: /var/run/dpdk/spdk_pid63026 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63064 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63099 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63119 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63141 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63156 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63187 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63208 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63278 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63294 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63333 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63406 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63456 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63486 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63573 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63621 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63659 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63872 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63974 00:29:08.699 Removing: /var/run/dpdk/spdk_pid63998 00:29:08.699 Removing: /var/run/dpdk/spdk_pid64321 00:29:08.699 Removing: /var/run/dpdk/spdk_pid64354 00:29:08.699 Removing: /var/run/dpdk/spdk_pid64642 00:29:08.699 Removing: /var/run/dpdk/spdk_pid65037 00:29:08.699 Removing: /var/run/dpdk/spdk_pid65299 00:29:08.699 Removing: /var/run/dpdk/spdk_pid66070 00:29:08.699 Removing: /var/run/dpdk/spdk_pid66884 00:29:08.699 Removing: /var/run/dpdk/spdk_pid66995 00:29:08.699 Removing: /var/run/dpdk/spdk_pid67068 00:29:08.699 Removing: /var/run/dpdk/spdk_pid68327 00:29:08.699 Removing: /var/run/dpdk/spdk_pid68529 00:29:08.699 Removing: /var/run/dpdk/spdk_pid71430 00:29:08.699 Removing: /var/run/dpdk/spdk_pid71727 00:29:08.699 Removing: /var/run/dpdk/spdk_pid71835 00:29:08.699 Removing: /var/run/dpdk/spdk_pid71963 00:29:08.699 Removing: /var/run/dpdk/spdk_pid71985 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72018 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72040 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72127 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72261 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72406 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72485 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72674 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72758 00:29:08.699 Removing: /var/run/dpdk/spdk_pid72845 00:29:08.699 Removing: /var/run/dpdk/spdk_pid73147 00:29:08.699 Removing: /var/run/dpdk/spdk_pid73527 00:29:08.699 Removing: /var/run/dpdk/spdk_pid73529 00:29:08.699 Removing: /var/run/dpdk/spdk_pid73801 00:29:08.699 Removing: /var/run/dpdk/spdk_pid73815 00:29:08.699 Removing: /var/run/dpdk/spdk_pid73839 00:29:08.699 Removing: /var/run/dpdk/spdk_pid73865 00:29:08.700 Removing: /var/run/dpdk/spdk_pid73870 00:29:08.700 Removing: /var/run/dpdk/spdk_pid74157 00:29:08.700 Removing: /var/run/dpdk/spdk_pid74200 00:29:08.700 Removing: /var/run/dpdk/spdk_pid74482 00:29:08.700 Removing: /var/run/dpdk/spdk_pid74678 00:29:08.700 Removing: /var/run/dpdk/spdk_pid75051 00:29:08.700 Removing: /var/run/dpdk/spdk_pid75561 00:29:09.025 Removing: 
/var/run/dpdk/spdk_pid76325 00:29:09.025 Removing: /var/run/dpdk/spdk_pid76921 00:29:09.025 Removing: /var/run/dpdk/spdk_pid76930 00:29:09.025 Removing: /var/run/dpdk/spdk_pid78815 00:29:09.025 Removing: /var/run/dpdk/spdk_pid78877 00:29:09.025 Removing: /var/run/dpdk/spdk_pid78932 00:29:09.025 Removing: /var/run/dpdk/spdk_pid78992 00:29:09.025 Removing: /var/run/dpdk/spdk_pid79106 00:29:09.025 Removing: /var/run/dpdk/spdk_pid79162 00:29:09.025 Removing: /var/run/dpdk/spdk_pid79217 00:29:09.025 Removing: /var/run/dpdk/spdk_pid79279 00:29:09.025 Removing: /var/run/dpdk/spdk_pid79592 00:29:09.025 Removing: /var/run/dpdk/spdk_pid80745 00:29:09.025 Removing: /var/run/dpdk/spdk_pid80885 00:29:09.025 Removing: /var/run/dpdk/spdk_pid81127 00:29:09.025 Removing: /var/run/dpdk/spdk_pid81679 00:29:09.025 Removing: /var/run/dpdk/spdk_pid81838 00:29:09.025 Removing: /var/run/dpdk/spdk_pid82001 00:29:09.025 Removing: /var/run/dpdk/spdk_pid82098 00:29:09.025 Removing: /var/run/dpdk/spdk_pid82262 00:29:09.025 Removing: /var/run/dpdk/spdk_pid82375 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83045 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83076 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83111 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83375 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83408 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83444 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83881 00:29:09.025 Removing: /var/run/dpdk/spdk_pid83903 00:29:09.025 Removing: /var/run/dpdk/spdk_pid84135 00:29:09.025 Clean 00:29:09.025 14:07:07 -- common/autotest_common.sh@1447 -- # return 0 00:29:09.025 14:07:07 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:29:09.025 14:07:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.025 14:07:07 -- common/autotest_common.sh@10 -- # set +x 00:29:09.025 14:07:07 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:29:09.025 14:07:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.025 14:07:07 -- common/autotest_common.sh@10 -- # set +x 00:29:09.025 14:07:07 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:09.283 14:07:07 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:09.283 14:07:07 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:09.283 14:07:07 -- spdk/autotest.sh@387 -- # hash lcov 00:29:09.283 14:07:07 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:09.283 14:07:07 -- spdk/autotest.sh@389 -- # hostname 00:29:09.283 14:07:07 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:09.283 geninfo: WARNING: invalid characters removed from testname! 
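[Editor's note] The coverage flow this job drives with lcov (the capture started just above and the merge/filter invocations that follow below) can be reproduced by hand. The sketch here is an assumption based only on the commands visible in this log; the final genhtml step does not appear in the log and is added purely as the conventional way to render the merged tracefile.

    # Sketch (assumption): reproduce the coverage report outside the CI job.
    # Paths mirror the ones in this log; genhtml is not shown in the log and
    # is included only as the usual last step.
    cd /home/vagrant/spdk_repo/spdk
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    lcov $LCOV_OPTS -c -d . -t "$(hostname)" -o ../output/cov_test.info      # capture coverage from the test run
    lcov $LCOV_OPTS -a ../output/cov_base.info -a ../output/cov_test.info \
         -o ../output/cov_total.info                                         # merge with the pre-test baseline
    lcov $LCOV_OPTS -r ../output/cov_total.info '*/dpdk/*' '/usr/*' \
         -o ../output/cov_total.info                                         # strip external code, as the log does
    genhtml ../output/cov_total.info -o ../output/coverage                   # assumed rendering step, not in this log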
00:29:35.830 14:07:31 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:35.830 14:07:34 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:37.738 14:07:36 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:40.272 14:07:38 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:42.176 14:07:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:44.082 14:07:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:46.072 14:07:44 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:46.331 14:07:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:46.331 14:07:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:46.331 14:07:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.331 14:07:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.331 14:07:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.331 14:07:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.331 14:07:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.331 14:07:44 -- paths/export.sh@5 -- $ export PATH 00:29:46.331 14:07:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.331 14:07:44 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:46.331 14:07:44 -- common/autobuild_common.sh@437 -- $ date +%s 00:29:46.331 14:07:44 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715782064.XXXXXX 00:29:46.331 14:07:44 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715782064.MbfsrS 00:29:46.331 14:07:44 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:29:46.331 14:07:44 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:29:46.331 14:07:44 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:46.331 14:07:44 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:46.331 14:07:44 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:46.331 14:07:44 -- common/autobuild_common.sh@453 -- $ get_config_params 00:29:46.331 14:07:44 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:29:46.331 14:07:44 -- common/autotest_common.sh@10 -- $ set +x 00:29:46.331 14:07:44 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:29:46.331 14:07:44 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:29:46.331 14:07:44 -- pm/common@17 -- $ local monitor 00:29:46.331 14:07:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:46.331 14:07:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:46.331 14:07:44 -- pm/common@25 -- $ sleep 1 00:29:46.331 14:07:44 -- pm/common@21 -- $ date +%s 00:29:46.331 14:07:44 -- pm/common@21 -- $ date +%s 00:29:46.331 14:07:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715782064 00:29:46.331 14:07:44 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715782064 00:29:46.331 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715782064_collect-vmstat.pm.log 00:29:46.331 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715782064_collect-cpu-load.pm.log 00:29:47.268 14:07:45 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:47.268 14:07:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:47.268 14:07:45 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:47.268 14:07:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:47.268 14:07:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:47.268 14:07:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:47.268 14:07:45 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:47.268 14:07:45 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:47.268 14:07:45 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:47.268 14:07:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:47.268 14:07:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:47.268 14:07:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:47.268 14:07:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:47.268 14:07:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:47.268 14:07:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:47.268 14:07:45 -- pm/common@44 -- $ pid=85874 00:29:47.268 14:07:45 -- pm/common@50 -- $ kill -TERM 85874 00:29:47.268 14:07:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:47.268 14:07:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:47.268 14:07:45 -- pm/common@44 -- $ pid=85876 00:29:47.268 14:07:45 -- pm/common@50 -- $ kill -TERM 85876 00:29:47.268 + [[ -n 5097 ]] 00:29:47.268 + sudo kill 5097 00:29:47.278 [Pipeline] } 00:29:47.297 [Pipeline] // timeout 00:29:47.302 [Pipeline] } 00:29:47.320 [Pipeline] // stage 00:29:47.325 [Pipeline] } 00:29:47.344 [Pipeline] // catchError 00:29:47.353 [Pipeline] stage 00:29:47.355 [Pipeline] { (Stop VM) 00:29:47.370 [Pipeline] sh 00:29:47.655 + vagrant halt 00:29:50.962 ==> default: Halting domain... 00:29:57.532 [Pipeline] sh 00:29:57.811 + vagrant destroy -f 00:30:01.117 ==> default: Removing domain... 
00:30:01.128 [Pipeline] sh 00:30:01.409 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:01.418 [Pipeline] } 00:30:01.436 [Pipeline] // stage 00:30:01.442 [Pipeline] } 00:30:01.459 [Pipeline] // dir 00:30:01.465 [Pipeline] } 00:30:01.486 [Pipeline] // wrap 00:30:01.492 [Pipeline] } 00:30:01.508 [Pipeline] // catchError 00:30:01.518 [Pipeline] stage 00:30:01.519 [Pipeline] { (Epilogue) 00:30:01.535 [Pipeline] sh 00:30:01.818 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:07.103 [Pipeline] catchError 00:30:07.105 [Pipeline] { 00:30:07.121 [Pipeline] sh 00:30:07.421 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:07.421 Artifacts sizes are good 00:30:07.430 [Pipeline] } 00:30:07.446 [Pipeline] // catchError 00:30:07.456 [Pipeline] archiveArtifacts 00:30:07.462 Archiving artifacts 00:30:07.605 [Pipeline] cleanWs 00:30:07.617 [WS-CLEANUP] Deleting project workspace... 00:30:07.617 [WS-CLEANUP] Deferred wipeout is used... 00:30:07.654 [WS-CLEANUP] done 00:30:07.656 [Pipeline] } 00:30:07.674 [Pipeline] // stage 00:30:07.680 [Pipeline] } 00:30:07.696 [Pipeline] // node 00:30:07.701 [Pipeline] End of Pipeline 00:30:07.737 Finished: SUCCESS
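[Editor's note] As a footnote to the keyring_file run recorded earlier in this log: the test verifies key usage by querying the bdevperf instance over its UNIX-domain RPC socket. The commands below are a minimal sketch of those checks, copied from the rpc.py and jq calls visible in the log; the socket path and key names are whatever that particular run used, not fixed values.

    # Sketch (assumption): the same RPC queries the keyring_file test issues
    # against bdevperf via /var/tmp/bperf.sock, runnable while the target is up.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock keyring_get_keys | jq length                        # test expects 2 registered keys
    $RPC -s /var/tmp/bperf.sock keyring_get_keys \
        | jq '.[] | select(.name == "key0") | .refcnt'                              # key0 held by the attached controller
    $RPC -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name'        # test expects nvme0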